Jakarta – Anthropic announced on Wednesday, 27 August, that it had detected and blocked hackers' attempts to abuse its Claude artificial intelligence (AI) system to craft phishing emails, write malicious code, and circumvent safety filters.
The findings, published in a report, highlight growing concern that AI tools are increasingly being exploited for cybercrime, prompting calls for tech companies and regulators to strengthen safeguards as the technology spreads.
Anthropic's report states that its internal systems succeeded in stopping these attacks. The company also shared case studies showing how the perpetrators attempted to use Claude to produce harmful content, to help others understand the risks.
The report describes attempts to use Claude to compose tailored phishing emails, to write or debug snippets of malicious code, and to bypass safety filters through repeated prompting.
It also notes efforts to script influence campaigns by mass-producing persuasive posts, and to give low-skilled hackers step-by-step guidance for attacks.
The company, which is backed by Amazon.com and Alphabet, did not publish technical indicators such as IP addresses or specific prompts, but said it had blocked the accounts involved and tightened its safety filters after detecting the activity.
Security experts say that criminals are increasingly turning to AI to make fraud more convincing and to accelerate hacking attempts. Such tools can help compose realistic phishing messages, automate parts of malware development, and even assist in planning attacks.
Security researchers warn that as AI models become more sophisticated, the risk of abuse will grow unless companies and governments act quickly. Anthropic said it applies strict security practices, including routine testing and external reviews, and plans to continue publishing reports when it finds major threats.
Companies such as Microsoft, SoftBank-backed OpenAI, and Google face similar scrutiny over concerns that their AI models could be exploited for hacking or fraud, fueling calls for stronger security.
Governments have also begun taking steps to regulate the technology. The European Union is moving ahead with its Artificial Intelligence Act, while the United States is pushing for voluntary security commitments from major developers.