(844) 627-8267 | Info@NationalCyberSecurity

How a Russian spyware company ‘hacked’ ChatGPT and turned it into a tool for spying on internet users

A Russian surveillance company with expertise in online hacking and spying was able to repurpose OpenAI’s ChatGPT, turning it into a spyware tool for monitoring internet users. The company specializes in sentiment analysis and hacking.

In a recent investigative report, Forbes revealed that Social Links, a Russian spyware company previously banned from Meta’s platforms for alleged surveillance activities, has co-opted ChatGPT for spying on people using the internet.

This unsettling use of ChatGPT, which involves collecting and analyzing social media data to gauge users’ sentiments, adds yet another controversial dimension to the chatbot’s use cases.

Presenting its unconventional use of ChatGPT at a security conference in Paris, Social Links showcased the chatbot’s proficiency in text summarization and analysis. By feeding it data, gathered through its proprietary tool, about online discussions of a recent controversy in Spain, the company demonstrated how ChatGPT could quickly process and categorize sentiment as positive, negative, or neutral. The results were then presented in an interactive graph.
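To illustrate the kind of pipeline described above, here is a minimal sketch of how scraped posts might be pushed through a chat model for three-way sentiment labeling. The prompt wording, the `parse_label` helper, and the function names are illustrative assumptions, not Social Links' actual implementation; the model call itself is omitted so only the prompt construction and result aggregation are shown.

```python
# Hypothetical sketch of a sentiment-categorization pipeline around a chat
# model. Nothing here reflects Social Links' real tooling; it only mirrors
# the reported workflow: prompt per post, normalize the reply, tally counts.
from collections import Counter

LABELS = ("positive", "negative", "neutral")

def build_prompt(post: str) -> str:
    """Build a single-label classification prompt for a chat model."""
    return (
        "Classify the sentiment of the following post as exactly one of "
        "positive, negative, or neutral. Reply with the label only.\n\n"
        f"Post: {post}"
    )

def parse_label(reply: str) -> str:
    """Normalize a raw model reply to one of the three labels."""
    reply = reply.strip().lower()
    for label in LABELS:
        if reply.startswith(label):
            return label
    return "neutral"  # fall back when the reply is unparseable

def tally(replies: list[str]) -> Counter:
    """Aggregate per-post labels into counts suitable for charting."""
    return Counter(parse_label(r) for r in replies)
```

In such a setup, each post's `build_prompt` output would be sent to the model, and the collected replies fed to `tally` to produce the positive/negative/neutral breakdown behind the interactive graph the company demonstrated.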

Privacy advocates, however, find this development deeply troubling. Beyond the immediate concerns raised by this specific case, there is a broader worry about the potential for AI to amplify the capabilities of the surveillance industry.

Rory Mir, Associate Director of Community Organizing at the Electronic Frontier Foundation, expressed apprehension that AI could enable law enforcement to expand surveillance efforts, allowing smaller teams to monitor larger groups more efficiently.

Mir highlighted the existing practice of police agencies using fake profiles to infiltrate online communities, causing a chilling effect on online speech. With the integration of AI, Mir warned that tools like ChatGPT could facilitate quicker analysis of data collected during undercover operations, effectively enabling and escalating online surveillance.

A significant drawback noted by Mir is the track record of chatbots delivering inaccurate results. In high-stakes scenarios like law enforcement operations, relying on AI becomes precarious.

Mir emphasized that when AI influences critical decisions such as job applications or police attention, biases inherent in the training data—often sourced from platforms like Reddit and 4chan—become not just factors to consider but reasons to reconsider the use of AI in such contexts.

The opaque nature of AI training data, referred to as the “black box,” adds another layer of concern. Mir pointed out that biases from the underlying data, originating from platforms notorious for diverse and often extreme opinions, may manifest in the outputs of the algorithm, making its responses potentially untrustworthy.

The evolving landscape of AI applications in surveillance raises important questions about ethics, biases, and the potential impact on individual freedoms and privacy.

(With input from agencies)

Published on: November 20, 2023 13:26:06 IST


