Info@NationalCyberSecurity
Info@NationalCyberSecurity

Concerns over AI Chatbots’ Security Highlighted at DefCon hacking convention | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #hacker


White House officials, alarmed by the potential societal harm posed by AI chatbots, are closely following a three-day competition at the DefCon hacker convention in Las Vegas. More than 3,500 participants at the event are attempting to uncover vulnerabilities in eight leading large-language models. The competition is the first-of-its-kind “red-teaming” exercise for multiple AI models. However, the results of the competition will not be made public until February. It is expected that addressing the flaws in these models, which have been found to be prone to racial and cultural biases and easily manipulated, will require significant time and monetary investments.

Current AI models have been deemed unwieldy, brittle, and malleable by academic and corporate research. Security was an afterthought during their development, resulting in systems with numerous vulnerabilities. Chatbots from OpenAI and Google, such as ChatGPT and Bard, were trained using billions of data points from the internet, making them perpetual works-in-progress. As a consequence, these models have had to deal with numerous security vulnerabilities uncovered by researchers and tinkers.

Security experts have expressed concerns about the lack of safeguards in place for these AI systems. Researchers at Carnegie Mellon University have identified that poisoning a small fraction of the data used to train AI systems can cause significant havoc. The authors of “Not with a Bug but with a Sticker,” a book on AI security, also highlight examples of AI systems being tricked into misinterpreting commands, such as an AI assistant ordering 100 pizzas after hearing a Beethoven concerto clip.

Furthermore, the authors’ survey of over 80 organizations found that the vast majority had no response plan for data poisoning or dataset theft. While major AI companies have committed to prioritizing security and safety by allowing outside scrutiny of their models, there are concerns that these measures may not be sufficient.

Experts warn that AI chatbots pose risks to privacy and the security of sensitive information. There is a fear that malicious actors could exploit AI system vulnerabilities to extract personal data from supposedly secure systems, such as hospitals, banks, and employers. Additionally, AI language models can retrain themselves using junk data, which can lead to pollution of the models’ outputs.

In conclusion, the competition at DefCon highlights the urgent need to address the security vulnerabilities inherent in AI chatbots. With their transformative potential for society, it is crucial to invest in research and development to ensure their safety and security.

——————————————————–


Click Here For The Original Story From This Source.

National Cyber Security

FREE
VIEW