(844) 627-8267
(844) 627-8267

Making AI Chatbots Say Terrible Things | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #hacker

Last week, the six biggest companies in AI presented a unique challenge to hackers: to make their chatbots say the most terrible things. Held as part of Def Con, the world’s largest hacker conference, the contest aimed to identify flaws in the chatbots through “prompt injections.” Instead of looking for software vulnerabilities, hackers were tasked with confusing the chatbots by entering prompts that would result in unintended responses.

Among the participating chatbots were Google’s Bard, OpenAI’s ChatGPT, and Meta’s LLaMA. The event saw a considerable turnout, with around 2,000 hackers estimated to have participated over the weekend. Sven Cattell, founder of the AI Village (the nonprofit organization that hosted the event within Def Con), highlighted the need for more testing of these chatbots, as there currently aren’t enough people dedicated to identifying flaws.

Generative AI chatbots, also known as large language models, generate responses based on user prompts. While these bots are capable of performing various tasks, they can often provide incorrect or false information. Companies have been developing these bots for years, but the rush to release improved versions intensified after the viral success of ChatGPT3.

The contest aimed to test the chatbots’ ability to handle different types of interactions. The companies behind the bots wanted to ensure that their products could reliably respond in innocent conversations. Categories for tricking the bots included using demographic stereotypes, providing false legal information, and convincing the bots to believe they were sentient rather than AI.

Rumman Chowdhury, a trust and safety consultant who oversaw the contest, highlighted the need for these chatbots to interact accurately in order to be marketable products. The companies saw the hacker community as a valuable resource for testing, as they bring diverse perspectives and expertise that may not be present in the companies’ staff.

While there were limits to the hackers’ access to the chatbot systems and the published results won’t be available until February, the hackers successfully managed to get the chatbots to generate clearly false responses. However, attempts to defame celebrities by associating them with criminal activities failed.

Chowdhury emphasized that ensuring factual accuracy for these chatbots is an immense challenge. It’s a problem that extends beyond generative AI and one that social media companies have also struggled with in terms of misinformation. Determining what constitutes misinformation in gray areas like vaccines and controversial topics is subjective and complex.

In conclusion, the challenge presented at Def Con highlighted the importance of objectively testing and identifying flaws in AI chatbots to improve their reliability and accuracy in various interactions.


Click Here For The Original Story From This Source.

National Cyber Security