Summer is the Oscar season of hacking. At conferences like Black Hat, Defcon, Summercon, HOPE, and Usenix, benevolent hackers seeking fame and prestige—and occasionally the dream of making the world more secure—show a global audience what they can do. This year they showed us that they can hack your car.
In a seemingly non-stop series of proof-of-concept attacks over the last three months, security researchers demonstrated everything from unlocking doors and turning on windshield wipers to jerking steering wheels, disabling brakes and even paralyzing a Jeep on a highway with me inside. To anyone paying attention to all those headlines, it may have seemed like the cyberautopocalypse.
But that vehicular doomsday hasn’t actually arrived yet. All of this troubling research should instead serve as the harbinger of a future where digital carjackings are real and even commonplace. Luckily, these warnings have come well before any real-world auto hacks with flesh-and-blood consequences. So now that fall is here, let’s take a look back at the summer of 2015, which will forever be known as the Summer of Epic Car Hacking, to filter out the fear and find some lessons instead. Here’s what we’ve learned.
1. Car Companies Need Hackers
When hackers Charlie Miller and Chris Valasek showed that they could gain full, remote control of a 2014 Jeep Cherokee to disable its brakes and transmission, Chrysler responded in a not-very-reassuring statement that it employs an “Embedded System Quality Engineering team dedicated to identifying and implementing software best practices across FCA globally.”
But that team clearly hadn’t found or fixed any of the security vulnerabilities in a chain that led all the way from the internet connection in the car’s dashboard computer to its critical driving systems. To truly secure vehicles from hackers as smart as Miller and Valasek, car companies’ security teams will have to hire them. And they’ll have to give those hackers a chance to review code for security bugs before it’s deployed.
“There’s no way to get out of this cat and mouse game,” says Stefan Savage, a computer scientist at the University of California San Diego who led a team that did its own breakthrough car hacking research in 2010 and 2011. “There’s always something you haven’t thought about. You need a style of building software that now basically requires a whole lot of security people looking at it to stay ahead of all of your potential adversaries.”
Aside from hiring hackers, automakers also need to befriend them: Vulnerability disclosure programs, for instance, encourage outside security researchers to report bugs in software. Most smart tech companies now go even further, with a “bug bounty” that pays researchers for information about exploitable vulnerabilities in their code. Among carmakers today, only Tesla has such a program. The vast majority of carmakers don’t even have a web page that tells hackers how to report security vulnerabilities, not to mention a standing offer to pay them for that research.
Jeep hackers Miller and Valasek are a prime example of the generally frosty relationship between hackers and car companies today. Despite the vehicle-hacking virtuosity displayed in their three years of research, neither Chrysler nor any other car company has so much as invited them over for a cup of coffee. “You might have thought that after our Jeep hack, they would call us for advice or to try to hire us,” says Miller. “None of them did.”
2. Cars Need Immune Systems
There will be bugs. And even if car companies start hiring hackers in droves or otherwise dramatically improve their security auditing skills, they won’t find them all. They also need the ability to deal with those bugs when they’re exposed and exploited.
That means carmakers need to architect cars so that the damage from any single bug is limited, says Josh Corman, a co-founder of I Am the Cavalry, a grassroots security non-profit devoted to bridging the gap between hackers and industry. They need features in cars that detect, record, and respond to attacks in real time. And they need systems that fix bugs, fast.
All of those requirements are outlined in a set of five recommendations I Am the Cavalry published last year calling on car companies to implement features on car networks like segmentation and isolation of sensitive car components—internet-connected infotainment systems shouldn’t be able to talk to brakes or transmission, for instance—and internal logging. “The best secure development and bug bounty programs in the world are still missing dozens of bugs a month,” says Corman. “We want auto companies to be prepared for failure…A strategic posture towards failure is table stakes.”
A year before their Jeep hack, for instance, Miller and Valasek showed off a prototype for an intrusion detection system for cars: a small, cheap device designed to plug into a car’s network, watch for suspicious digital signals, and if necessary put the car into a safe “limp mode” that disables computer features to stop a hack-in-progress.
“If that had been plugged into the Jeep, we wouldn’t have been able to attack it,” says Miller. Without that sort of feature, he adds, “if you’re attacked you have no idea it’s happened and no way to react to it…You’re completely blind.”
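The detection logic Miller and Valasek described is, at its core, anomaly detection on CAN bus traffic: learn what normal message patterns look like, then flag traffic that deviates, such as an attacker flooding the bus with spoofed commands. The sketch below is a simplified, hypothetical illustration of that frequency-based idea, not their actual implementation; the arbitration IDs, threshold, and message format are all invented for the example.

```python
# Toy frequency-based CAN intrusion detector (illustrative sketch only).
# Learns the normal per-ID message rate from clean traffic, then flags
# any arbitration ID whose rate spikes well above its baseline, or that
# never appeared during training -- the signature of injected messages.

from collections import Counter

class CanRateDetector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # rate multiplier considered suspicious
        self.baseline = {}          # arbitration ID -> messages per second

    def train(self, messages, duration_s):
        """Learn normal per-ID rates from clean traffic.
        `messages` is a list of (arbitration_id, payload) tuples."""
        counts = Counter(mid for mid, _ in messages)
        self.baseline = {mid: n / duration_s for mid, n in counts.items()}

    def check(self, messages, duration_s):
        """Return IDs whose observed rate exceeds threshold x baseline,
        plus any ID never seen during training."""
        counts = Counter(mid for mid, _ in messages)
        suspicious = []
        for mid, n in counts.items():
            rate = n / duration_s
            normal = self.baseline.get(mid)
            if normal is None or rate > self.threshold * normal:
                suspicious.append(mid)
        return suspicious
```

In a real device, a flagged ID would trigger the “limp mode” response Miller describes, disabling computer features rather than letting spoofed commands through; production systems would also inspect payloads and timing, not just rates.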
More important still may be the ability to patch a security flaw after it’s found in cars on the road. Despite their increasing connections to the internet, most vehicles today still don’t have the ability to receive so-called “over-the-air” updates that patch their software via their cellular internet connections. Chrysler responded to Miller and Valasek’s Jeep hack by mailing an update on a USB drive to 1.4 million customers—hardly the most secure patching technique. And when researchers told GM in 2010 about an attack that could take over millions of vehicles via their OnStar connections, WIRED discovered that the auto giant was so flummoxed that it took nearly five years to fully fix the problem.
3. Car Cybersecurity Goes Beyond Cars
Car companies aren’t to blame for every car hacking bug. A team of University of California at San Diego security researchers proved this in August when they hacked a common gadget offered by car insurance companies, designed to be plugged into the OBD2 port on a car’s dashboard to track the car’s speed and acceleration in exchange for lower insurance rates. By compromising that tiny, insecure, internet-connected dongle via text message, the researchers were able to access a Corvette’s internal network and disable its brakes. “Think twice about what you plug into your car,” UCSD researcher Karl Koscher warned at the time.
Internet-connected devices, in other words, can offer hackers inroads into cars, too—even cars that aren’t themselves connected to the internet. Koscher’s team at UCSD and the University of Washington illustrated that point back in 2011 when they showed that an attack could spread from a car dealership’s Wi-Fi network to its diagnostic tools, which then spread the attack when technicians plugged the infected tools into vehicles. And car hacker Craig Smith revived that notion last month when he showed off a device for finding vulnerabilities in those dealership tools that could turn them into an “auto brothel.” With an exploit targeting that sort of diagnostic tool bug, an infected car could install malware on a shop’s promiscuous repair equipment, which would then spread it to every vehicle that comes in for service.
Smartphones may offer an even easier avenue into a vehicle’s innards: Independent hacker Samy Kamkar demonstrated that in August when he revealed a device that could be planted on a vehicle to intercept the credentials from a user’s remote unlocking and ignition app and send them to a hacker. With those credentials in hand, the attacker would be able to track a target car’s location, unlock its doors, or even start its ignition. The hack sent GM, Chrysler, BMW and Mercedes-Benz scrambling for a software fix. And it shows how the gadgets in a car’s orbit can leave it just as vulnerable as a flaw in the car’s own code.
4. Customers Need to Pay Attention…
Car companies will start paying for security when they start to pay for insecurity. If consumers shift their spending to companies that keep cars safe from hackers, car companies will shift their budgets to follow.
Right now, sorting out which car to buy based on digital security features is admittedly difficult. In 2014, Charlie Miller and Chris Valasek published a speculative list of the most hackable cars, trucks and SUVs based on analyses of their architectures and mechanical specs. But even Miller admits that the list isn’t definitive enough to make solid buying choices. “Ultimately it would be nice to have consumers buy things that are more secure, but I don’t know how to do that,” Miller says. “Car companies aren’t transparent in what they do.”
But some companies are starting to distinguish themselves. Tesla, for instance, should get credit with consumers for its over-the-air updates that fix security vulnerabilities, like the one it issued to patch six flaws that a pair of hackers revealed at the Defcon hacker conference. It offers a $10,000 bug bounty program. And it’s employed world-class security minds like Chris Evans, who led Google’s Project Zero team of bug-hunting hackers, and before him Kristin Paget, who previously held the title of “hacker princess” at Apple. All of that should earn points with geek buyers who want a secure vehicle.
If a few lawmakers have their way, factoring a vehicle’s security into a car-buying decision may soon get easier. On the same day as Miller and Valasek’s Jeep hack, Senators Ed Markey and Richard Blumenthal introduced a bill to create a report card for security and privacy that would be posted on a car’s window at the dealership along with other data. Miller approves. “It’s a public safety issue, and the car companies aren’t coming forward and telling us what they’re doing,” he says. “There’s nothing else we can do as consumers to protect ourselves. I think it’s a perfect opportunity for the government to mandate some changes.”
5. …And So Do Regulators
The National Highway Traffic Safety Administration, the government agency most directly responsible for making cars safe from hackers, took an unprecedented step when it pressured Chrysler to recall 1.4 million vehicles to fix the security vulnerabilities Miller and Valasek exposed. But there should have been a precedent: Five years earlier the UCSD and UW researchers had briefed the NHTSA on equally serious security risks affecting millions of GM cars and trucks. In that case, the NHTSA snoozed while GM implemented half-measure protections and then took half a decade to fully solve the problem. When WIRED asked NHTSA for comment on that incident, a spokesperson claimed that the GM attack had required physical access, which isn’t true.
On other software update issues, the NHTSA has erred in the other extreme: It sparred with Tesla over the company’s cellular software updates to fix a battery-fire problem, arguing that the update should have warranted a formal recall. That’s a problematic idea of how software works: If Microsoft were forced to declare a recall every Patch Tuesday, that would be a lot of recalls.
The new world of car hacking signals that the NHTSA will have to find a balance between demanding nonstop recalls for every bug fix and letting car companies leave gaping vulnerabilities open to exploitation for months or years. The pressure it put on Chrysler to quickly fix its bugs last July, at least, shows that the agency is no longer asleep at the wheel.