Take this dystopian science-fiction story, in which a major military power is using machine intelligence to identify potential threats, which it then eliminates using unmanned drones.
The twist of the story is that even a very accurate algorithm can lead to unintended consequences when the actual threat ratio is very low. This is a classic problem in statistics, known as the base rate fallacy.
Imagine that out of a population of a hundred million, only 100 people represent a threat, and the algorithm is 99% accurate at identifying them.
Which means that out of the 100 threats, it will miss only 1. So far, so good.
Unfortunately, it also means that out of the remaining 99,999,900, it will falsely identify 999,999 as threats even though they aren't. So out of the 1,000,098 people who are targeted, only 99 are genuine threats; the remaining 999,999 are innocent.
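If you want to double-check the arithmetic, here is a quick Python sketch (the function and its parameter names are mine, purely for illustration; no real targeting system is implied):

```python
def targeting_stats(population, threats, hit_rate, false_positive_rate):
    # Hypothetical helper to illustrate the base rate arithmetic.
    # hit_rate: fraction of real threats the algorithm correctly flags
    # false_positive_rate: fraction of innocent people it wrongly flags
    innocents = population - threats
    true_positives = threats * hit_rate
    false_positives = innocents * false_positive_rate
    return true_positives, false_positives

# Scenario 1: 99% accurate in both directions
tp, fp = targeting_stats(100_000_000, 100, hit_rate=0.99, false_positive_rate=0.01)
print(f"targeted: {tp + fp:,.0f}  genuine: {tp:,.0f}  innocent: {fp:,.0f}")
# targeted: 1,000,098  genuine: 99  innocent: 999,999
```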
OK, so improve the algorithm. Suppose that, at the expense of more false negatives (say, a 50% miss rate), you raise its accuracy to 99.99% when it comes to false positives. Now you have 50 of the real threats identified, and you are still targeting 10,000 innocent people.
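Plugging the new numbers into the same sketch:

```python
# Scenario 2: half the real threats missed, but only 0.01% false positives
tp, fp = targeting_stats(100_000_000, 100, hit_rate=0.50, false_positive_rate=0.0001)
print(f"targeted: {tp + fp:,.0f}  genuine: {tp:,.0f}  innocent: {fp:,.0f}")
# targeted: 10,050  genuine: 50  innocent: 10,000
```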
Now imagine that the military power in question somehow convinces itself that this algorithmic approach to security is still a good idea, and implements it in practice.
And now stop imagining it. Because apparently this is exactly how targets for US military drone strikes in Pakistan have been selected, with the added twist that the science behind the algorithms might have been botched.
Oh, but a human is still in the loop… rubber-stamping a decision that is made by a machine and carried out by other machines, possibly eliminating several thousand innocent human beings.
As I said… welcome to Skynet, the dystopian network of homicidal machine intelligence from the Terminator movies.
Scared yet? Perhaps you should be. We should all be.