For a long time, information security was a contest between humans; only recently has it become a battle between human and machine. Advances in AI are slowly moving this fight into a new environment: machine versus machine, carefully directed by researchers on one side and hackers on the other.
A number of cybersecurity companies are now turning to machine learning in an attempt to stay one step ahead of attackers working to steal industrial secrets, disrupt national infrastructure, hold computer networks for ransom and even influence elections.
A 2016 study found that information theft is companies' primary security concern. Yet over half of them (58%) lack the systems needed to detect a sophisticated attack, which is explained by the fact that 42% have no threat detection program and 18% have no information security strategy at all.
At its most basic, machine learning for security involves feeding massive amounts of data to the AI program, which the software then analyzes to spot patterns and recognize what is, and isn’t, a threat. If you do this millions of times, the machine becomes smart enough to prevent intrusions and malware on its own.
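The learning step described above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration, not a production technique: it learns a per-label "centroid" from made-up, labeled network-event features and classifies new events by nearest centroid. All feature names and numbers are invented for the sketch.

```python
# Minimal sketch: learn what "benign" vs "malicious" events look like
# from labeled feature vectors, then classify unseen events.
# Features and values are hypothetical.
from statistics import mean

# Each event: (bytes_sent_kb, failed_logins, requests_per_min), label
TRAINING_DATA = [
    ((12.0, 0, 30), "benign"),
    ((8.0, 1, 25), "benign"),
    ((15.0, 0, 40), "benign"),
    ((900.0, 12, 600), "malicious"),
    ((750.0, 9, 550), "malicious"),
    ((1100.0, 15, 700), "malicious"),
]

def train(data):
    """Compute a centroid (per-feature mean) for each label."""
    centroids = {}
    for label in {lbl for _, lbl in data}:
        rows = [feats for feats, lbl in data if lbl == label]
        centroids[label] = tuple(mean(col) for col in zip(*rows))
    return centroids

def classify(centroids, event):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], event))

model = train(TRAINING_DATA)
print(classify(model, (10.0, 0, 35)))     # → benign
print(classify(model, (980.0, 11, 640)))  # → malicious
```

Real systems replace this toy with deep models trained on millions of examples, but the principle is the same: the software only knows the patterns the data contains.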
Machine learning naysayers argue that hackers can write malware specifically to trick AI. The software can learn remarkably fast, but it stumbles when it encounters data its creators didn't anticipate. That is a strong case against relying solely on AI for cybersecurity, where the stakes are so high.
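To make the evasion argument concrete, here is a hypothetical sketch of how a detector can be tricked by data it wasn't trained on: a model that "learned" from past bulk-exfiltration attacks flags traffic above a volume threshold, so an attacker who splits the same transfer into smaller, slower bursts slips under it. The threshold and traffic numbers are invented for illustration.

```python
# Hypothetical evasion example: a detector trained on historical
# attacks flags transfers above a learned volume threshold.
THRESHOLD_KB_PER_MIN = 500.0  # "learned" from past bulk exfiltration

def flags_as_attack(kb_per_minute):
    """Flag a one-minute traffic window as an attack if it exceeds the threshold."""
    return kb_per_minute > THRESHOLD_KB_PER_MIN

# A bulk transfer like those in the training data is caught...
print(flags_as_attack(900.0))  # → True

# ...but the same 900 KB, throttled into three smaller bursts,
# never trips the detector.
print(any(flags_as_attack(kb) for kb in (300.0, 300.0, 300.0)))  # → False
```

The attacker hasn't changed the payload at all, only its shape, which is exactly the kind of unanticipated input that trips up models trained purely on past examples.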
Machines can certainly help humans counter the scale and speed of attacks, but it will take years before they can actually call the shots. That's because the model for AI right now is still data cramming.
Cybersecurity powered by AI is simply the natural next step in protecting vulnerable data. The race between those building safe systems and those attacking them is crossing into new territory, but machines are still far from taking the lead.