Google CEO Sundar Pichai Says That AI Can Help Against Cybercrime

At the Munich Security Conference this week, Google CEO Sundar Pichai addressed fears surrounding the impact of AI on cybersecurity and said that AI can actually help against cybercrime.

He acknowledged that the growing sophistication of technology has paved the way for more advanced attacks, and we have seen the results ourselves. For instance, ransomware payments reached $1.1 billion in 2023, breaking all previous records—and that figure doesn’t even include damages and long-term costs.

“We are right to be worried about the impact on cybersecurity. But AI, I think, actually, counterintuitively, strengthens our defense on cybersecurity.”
Sundar Pichai, Google CEO

Pichai added that AI could help do away with the “defender’s dilemma”: attackers only have to succeed once, whereas defenders have to succeed every time to protect a system. For the first time, he said, AI will help defenders scale up their defense mechanisms.

Pichai said that AI can also help organizations detect attacks and respond to them more quickly.

Citing his own company’s products as examples, Pichai said that apps like Gmail and Chrome are already using AI to strengthen security. On top of that, to make AI-enabled security accessible to all, Google announced a new initiative last week under which it will provide Magika—a free, open-source, AI-powered file type identification tool designed to help detect malware more quickly.

Pichai isn’t the only one to share these beliefs about AI. Mark Hughes, president of security at IT services and consulting firm DXC, said that we are already starting to see the benefits of AI in helping engineers reverse attacks quickly. In his words, AI helps you run faster than your adversary, and if you can do that, you’ll be doing better than them.

Is AI Really a Foolproof Solution?

The irony of Pichai’s confidence is that while he was championing AI, his own company’s product Gemini was in the news—and not for a good reason. Some users found that the tool’s powerful image generation could produce historically and culturally inaccurate images.

Read more: Google promises to fix its AI image bot after it was accused of being too woke

Examples included images of Asian people dressed as Nazis, and, in another incident, a response to a request for a picture of America’s founding fathers that depicted women and people of color.

In fact, the companies competing in this AI race know the risks it poses. For that very reason, a group of 20 large organizations including Google, OpenAI, and Microsoft signed an agreement to combat the deceptive use of AI in the 2024 elections.

So, the big question that remains is whether a bot that can’t even get historical and cultural context right, and that is recognized as a threat to the democratic nature of elections, is actually capable of offering a long-term solution in the fight against cybercrime.
