The internet was created nearly 50 years ago by men — and a few women — who envisioned an “intergalactic network” where humans could pull data and computing resources from any mainframe in the world and in the process free up their minds from mundane and menial tasks.
“The hope is that, in not too many years, human brains and computing machines will be coupled,” wrote Joseph Carl Robnett Licklider, who was known as “Lick” and is widely remembered as the internet’s Johnny Appleseed. Mr. Licklider joined the Pentagon in 1962, and his ideas later formed the basis for the military’s primordial internet work.
Even a big-vision idealist like Mr. Licklider could never have imagined that more than 50 years later, we would be telling the internet our deepest secrets and our whereabouts, and plugging in our smartphones, refrigerators, cars, oil pipelines, power grid and uranium centrifuges.
And even the early internet pioneers at the Pentagon could not have foreseen that half a century later, the billions of mistakes made along the way to creating the internet of today and all the things attached to it would be strung together to form the stage for modern warfare.
It is rare to find a computer today that is not linked to another, that is not built from circuitry, applications and operating systems, and that has not — at one point or another — been probed by a hacker, digital criminal or nation looking for weaknesses to exploit for profit, espionage or destruction.
There is plenty of raw material to work with. On average, there are 15 to 50 defects per 1,000 lines of code in delivered software, according to Steve McConnell, the author of “Code Complete.” Today, most of the applications we rely on — Google Chrome, Microsoft Windows, Firefox and Android — contain millions of lines of code. The complexity of technology is only increasing, and with it the potential for defects.
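The arithmetic behind those figures is stark. A minimal sketch, assuming an illustrative browser-sized project of five million lines of code (the line count here is a hypothetical round number, not an audited figure for any real product):

```python
# Back-of-the-envelope estimate of latent defects in a large codebase,
# using McConnell's industry average of 15-50 defects per 1,000 lines
# of delivered code.

DEFECTS_PER_KLOC = (15, 50)  # low and high ends of McConnell's range

def defect_range(lines_of_code):
    """Return the (low, high) estimated defect count for a codebase."""
    kloc = lines_of_code / 1000
    low, high = DEFECTS_PER_KLOC
    return int(kloc * low), int(kloc * high)

# Assumed project size: roughly 5 million lines, on the order of a
# modern browser or mobile operating system.
low, high = defect_range(5_000_000)
print(f"Estimated latent defects: {low:,} to {high:,}")
# prints "Estimated latent defects: 75,000 to 250,000"
```

Even at the conservative end of the range, a project of that scale ships with tens of thousands of flaws, any one of which might be exploitable.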
The motivation to find exploitable defects in widely used code has never been higher. Governments big and small are stockpiling vulnerabilities and exploits in hardware, software, applications, algorithms and even security defenses like firewalls and antivirus software.
They are using these holes to monitor their perceived enemies, and many governments are storing them for a rainy day, when they might just have to drop a payload that disrupts or degrades an adversary’s transportation, energy or financial system.
They are willing to pay top dollar to anyone who can find and exploit these weaknesses, provided the finders hand them over and never speak a word to the companies whose programmers inadvertently wrote them into software in the first place.
The world caught one of its first glimpses of the market for vulnerabilities this year when James B. Comey, the director of the Federal Bureau of Investigation, suggested that his agency paid hackers more than $1.3 million for an iPhone exploit that allowed the F.B.I. to bypass Apple’s security.
That is on par with what other companies that buy and sell bugs to governments, like Zerodium, have offered to pay. Zerodium said it paid hackers $1 million for information on weaknesses in Apple’s iOS 9 operating system last fall, but the company resells those weaknesses to governments at a markup.
Those who follow the bug-and-exploit trade market closely caught an even bigger glimpse of its sponsors last summer when an Italian outfit called Hacking Team — which packages weaknesses into surveillance tools for governments across the globe — was itself hacked.
The leaks revealed a long customer list, including police departments and law enforcement and intelligence agencies in the United States and Europe, as well as in countries like Bahrain, Ethiopia, Sudan, Uzbekistan, Kazakhstan, Azerbaijan and Morocco.
But the market for exploitable bugs is much bigger than Hacking Team’s client list, and for more than 20 years nations have been paying huge sums to hackers willing to turn those weaknesses over to governments and withhold them from software companies.
In most cases those holes have been used for espionage, but increasingly they are being used for destruction. Stuxnet, the American-Israeli computer worm that destroyed centrifuges at Iran’s Natanz nuclear facility in 2009 and 2010, exploited four vulnerabilities in Microsoft Windows, including one in its print spooler service, to spin Iran’s uranium centrifuges out of control or stop them from spinning entirely.
Once Stuxnet and its motivations were uncovered — first by a security researcher in Belarus and then around the world — a Pandora’s box was opened.
Today, more than 100 governments have publicly acknowledged their own offensive cyberwar programs. Countries that were not in the market before Stuxnet was discovered are in it now.
Iranian officials now claim to have the third-largest digital army in the world, behind the United States and China. Those claims are impossible to verify, in large part because most countries keep such programs secret. But Iranian hackers have given plenty of demonstrations of their abilities.
Government officials in the United States hold Iranian hackers responsible for what they describe as a retaliatory attack against Saudi Aramco in 2012 that replaced the data on 30,000 Aramco computers with an image of a burning American flag.
The next year, Iranian hackers were blamed for a series of attacks on the United States banking system. And while security experts who have analyzed those attacks claim that the Iranians’ abilities are still nowhere near those of the United States and its closest allies, they are steadily improving.
Nations took a while to catch on to the wartime potential of the internet, but countries are now doubling down on their digital attack capabilities.
The jury is still out on whether attacks like Stuxnet violate international law. Digital espionage, like the Chinese hacking of the Office of Personnel Management discovered last year, does not. And even when such attacks violate domestic laws, the penalties are not much of a deterrent to attackers punching keystrokes from the other side of the world.