Crypto Security Warning: AI Routers Found Vulnerable to Private Key Theft


  • Researchers said AI-based intermediary routers have four attack paths, including malicious code injection and credential theft.
  • They found that some routers could expose private keys and seed phrases through plain-text processing, and observed asset outflows from a test Ethereum (ETH) wallet.
  • The researchers recommended that developers avoid directly entering private keys and seed phrases in AI environments and, over the longer term, adopt a cryptographic verification framework.


Photo: Shutterstock

Research has found that intermediary routers used in AI-based development environments may contain security vulnerabilities that could lead to cryptocurrency theft.

Cointelegraph reported on June 13 that researchers at the University of California, in a paper analyzing the supply chain for large language model, or LLM, routers, identified four attack paths, including malicious code injection and credential theft.

According to the study, some routers may process data in plain text while relaying user requests, raising the possibility that sensitive information such as private keys and seed phrases could be exposed externally. The researchers added that the risk may increase when AI coding tools are used for smart-contract development or wallet management.
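To illustrate why plain-text relaying is dangerous, the sketch below shows the kind of trivial pattern matching any intermediary that can read unencrypted prompts could apply to spot secrets. The patterns and function names are hypothetical illustrations, not taken from the paper: a 64-hex-character string is the usual shape of an Ethereum private key, and a run of 12 or 24 lowercase words resembles a BIP-39 seed phrase.

```python
import re

# Hypothetical heuristics for key-shaped material in prompt text.
# An Ethereum private key is 32 bytes, i.e. 64 hex characters,
# optionally prefixed with "0x".
PRIVATE_KEY_RE = re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b")
# A BIP-39 seed phrase is typically 12 or 24 space-separated words.
SEED_PHRASE_RE = re.compile(r"\b(?:[a-z]+ ){23}[a-z]+\b|\b(?:[a-z]+ ){11}[a-z]+\b")

def contains_secret(prompt: str) -> bool:
    """Return True if the prompt appears to contain a private key or seed phrase."""
    return bool(PRIVATE_KEY_RE.search(prompt) or SEED_PHRASE_RE.search(prompt))
```

Because the router sits between the user and the model, nothing more sophisticated than this is needed for it to harvest credentials from traffic it relays; the point of the sketch is how low the bar is, not how a real attack is written.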

In experiments analyzing 400 free routers and 28 paid routers, the team found cases in which some routers inserted malicious code or accessed external credentials. The researchers also observed asset outflows from a test Ethereum wallet during the process.

The analysis showed that even routers that appear normal can create security risks by reusing leaked credentials or executing commands without user confirmation.

The researchers said those structural characteristics make malicious behavior difficult to identify in advance. Because routers can read sensitive information even during normal data transmission, it can be hard to distinguish legitimate processing from theft.

They recommended that developers strengthen security measures in AI environments, including by avoiding the direct entry of sensitive information such as private keys or seed phrases. Over the longer term, the paper called for a cryptographic verification framework for AI responses.
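The recommendation to keep raw secrets out of AI environments can be sketched as a local redaction step applied before a prompt leaves the machine. This is a minimal illustration of the idea, not a method from the paper; the regex and placeholder are assumptions, and real tooling would also need to handle seed phrases and other secret formats.

```python
import re

# Hypothetical mitigation sketch: strip anything shaped like an Ethereum
# private key (64 hex characters, optional "0x" prefix) from a prompt
# before it is sent to an AI coding tool or router.
HEX_KEY_RE = re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b")

def redact(prompt: str) -> str:
    """Replace key-shaped strings with a placeholder before transmission."""
    return HEX_KEY_RE.sub("[REDACTED_KEY]", prompt)
```

Redaction of this kind only reduces accidental exposure; it does not address the paper's other findings, such as routers executing commands without confirmation, which is why the authors also call for cryptographic verification of AI responses.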


