New AI Targets Get Share Of $12 Million Hacker Bounty | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #hacker

Google has confirmed that it is expanding the existing vulnerability rewards program (VRP) to embrace attack scenarios that feature generative AI. The newly amended bug bounty program encourages hackers to explore attack scenarios and uncover vulnerabilities as they apply to Google’s AI systems and services.

Google’s AI Red Team Mimics Real Hack Attacks

In August, Google announced that it had created an AI Red Team to use the same kinds of attack methodologies as exploited by a nation-state, organized cybercrime groups and malicious insiders. “One of the key responsibilities of Google’s AI Red Team is to take relevant research and adapt it to work against real products and features that use AI to learn about their impact,” Daniel Fabian, head of Google Red Teams, said. “We leverage attackers’ tactics, techniques and procedures (TTPs) to test a range of system defenses.”

Google AI Bug Bounty Hackers Must Play By The Rules

Now, hackers outside the AI Red Team and external to Google itself, can look for the weak points in Google’s AI systems. The difference is that these hackers must work within a strict framework that defines what is in or out of scope. They will not have the same ‘anything goes’ approach to attack simulation, but that doesn’t make the AI bounty-hunting hackers any less vital. Like any bug bounty program, there are guidelines as to what type of vulnerabilities Google is looking to expose, the methods that can be used to do so and the process for both reporting and getting paid for those that are found. So, for example, prompt injections that are invisible to victims and change the state of the victim’s account or any of their assets are in scope. Using a product to generate violative, misleading, or factually incorrect content, including ‘hallucinations’ and factually inaccurate responses, is not. Equally, contexts where an adversary could reliably trigger a misclassification of a security control to be abused for malicious use is in scope, but where this does not pose a “compelling attack scenario or feasible path to Google or user harm” isn’t.

MORE FROM FORBESNo, 1Password Has Not Just Been Hacked-Your Passwords Are Safe

A $12 Million Bug Bounty Bonanza

Google has confirmed that while bounties will be paid for vulnerabilities disclosed under the VRP umbrella, the amount of those rewards will depend upon the “severity of the attack scenario and the type of target affected.” In 2022, however, more than $12 million was paid in such bounties to hackers who were part of the broader program.

“We look forward to continuing our work with the research community to discover and fix security and abuse issues in our AI-powered features,” a Google spokesperson said, “If you find a qualifying issue, please go to our Bug Hunter website to send us your bug report and if the issue is found to be valid be rewarded for helping us keep our users safe.”

Google’s confirmation of the new AI bug bounty program could not be more timely. Announcing a global AI safety summit and a proposed AI Safety Institute, the U.K. Prime Minister Rishi Sunak gave a speech on October 26 in which he said that “Criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.”

“Generative AI is a double-edged sword, as the cybersecurity landscape continues to evolve the proliferation of generative AI only adds further complexity to the mix,” Fabian Rech, a senior vice president at Trellix, said. “With the first AI Safety Summit launching next week, its vital for organizations to be aware what this will mean for the future of regulation with this emerging technology, and how businesses can be expected to utilise and integrate it.”

MORE FROM FORBESAndroid Users Warned Of 2 Zero-Day Exploits, Including Spy-On-Phone Attack

Follow me on Twitter or LinkedIn. Check out my website or some of my other work here. 


Click Here For The Original Story From This Source.

National Cyber Security