
Staying Safe from Chatbot Scams: Your Ultimate Guide


AI home assistant bot risks and vulnerabilities

Chatbot threats aren’t only online; most Americans have already opened their homes to very similar interfaces. The same conversational AI powering internet chatbots is built into virtual personal assistants (VPAs) like Amazon’s Alexa, Google’s Home, and Apple’s Siri.

More than half of respondents in a survey we conducted own an AI assistant, which suggests that roughly 120 million Americans regularly share personal information with these devices. Amazon’s Alexa is the most popular model today.

Home assistants offer convenience by handling household operations with an understanding of our personal needs and preferences. That same functionality makes the interfaces a risk to privacy and security. Understandably, not everyone trusts them.

More than 90 percent of users have doubts about home assistant security, and fewer than half express any confidence in them.

[Survey chart: How confident are you that companies have taken sufficient action to ensure that their AI voice-activated home assistants are secure?]

This skepticism is justified. Voice-activated assistants lack many of the safeguards that can foil a browser-based scambot. Rather than requiring password logins through verifiable web pages, assistants accept commands from anyone, with no visual confirmation of their remote connections. That opens the door to fraud on either end of the conversation. Add always-on microphones, and the recipe for potential disaster is complete.

Some VPA security lapses are borderline comical, like children commandeering the devices to buy toys with their parents’ credit cards. Other stories feel far more nefarious: Amazon has admitted that its employees listen to Alexa recordings, and Google and Apple have faced litigation over similar practices.

Third-party hacks are particularly frightening since virtual assistants often have access to personal accounts and may be connected to household controls (lighting, heating, locks, and more).

Outside breaches of AI assistants usually fall into one of the following categories:

  • Eavesdropping: The most basic exploit of an always-on microphone is turning it into a spying device. Constant listening is a feature, not a bug: devices passively await “wake words,” manufacturers review interactions to improve performance, and some assistants allow remote listening for communication or monitoring purposes. Hijacking this capability would effectively plant an evil ear right in your home. Beyond the creepy invasion of privacy, such intrusions could capture financial details, password clues, and blackmail material, or simply confirm that a residence is empty.
  • Imposters: Smart speakers connect customers to services via voice commands with little verification, a vulnerability that clever programmers abuse to fool unwitting users. Hackers can employ a method called “voice squatting,” in which unwanted apps launch in response to commands that sound like legitimate requests (a minimal sketch of why sound-alike names collide follows this list). An alternate approach called “voice masquerading” involves apps pretending to close or connect elsewhere. Rather than obeying requests to shut down or launch other apps, corrupt programs feign execution, then collect information intended for others. According to our study, only 15 percent of respondents knew of these possible hacks.
  • Overriding: Several technological tricks can allow outsiders to control AI assistants remotely. Devices already rely on ultrasonic tones to pair with other electronics, so voice commands encoded above 20 kHz are inaudible to humans yet still received by compliant smart speakers. These so-called “dolphin attacks” can be broadcast independently or embedded within other audio (a sketch of the underlying signal trick also follows this list). Researchers have also triggered assistant actions via light commands, using fluctuating lasers that devices interpret as voices to control them from afar. Eighty-eight percent of AI assistant owners in our study had never heard of these dastardly tactics.
  • Self-hacks: Researchers recently uncovered a method to turn smart speakers against themselves. Hackers within Bluetooth range can pair with an assistant and use a program to force it to speak audio commands. Since the chatbot is chatting with itself, the instructions are perceived as legitimate and executed – potentially accessing sensitive information or opening doors for an intruder.
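
To make the voice-squatting idea concrete, here is a minimal Python sketch (standard library only) that compares a legitimate skill invocation against sound-alike candidates. The skill names and the 0.7 threshold are invented for illustration, and plain text similarity stands in for the acoustic confusion a real speech recognizer would exhibit.

```python
# Sketch: why "voice squatting" works. A malicious skill registers an
# invocation name that a recognizer can easily confuse with a legitimate one.
# All skill names below are hypothetical examples.
from difflib import SequenceMatcher

LEGITIMATE_SKILL = "capital one"      # hypothetical banking skill
SQUATTER_CANDIDATES = [
    "capital won",            # homophone of the real name
    "capitol one",            # near-identical spelling
    "capital one please",     # real name plus a polite suffix
    "weather report",         # unrelated control case
]

def similarity(a: str, b: str) -> float:
    """Rough textual similarity between two transcribed commands (0..1)."""
    return SequenceMatcher(None, a, b).ratio()

if __name__ == "__main__":
    for candidate in SQUATTER_CANDIDATES:
        score = similarity(LEGITIMATE_SKILL, candidate)
        verdict = "likely collision" if score > 0.7 else "probably distinct"
        print(f"{candidate!r:22} similarity={score:.2f} -> {verdict}")
```

In practice, squatters target how a name is pronounced rather than how it is spelled, which is why purely textual screening by app stores tends to miss these collisions.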
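
The “dolphin attack” described above boils down to amplitude-modulating an audible command onto an ultrasonic carrier; nonlinearity in the speaker’s microphone then demodulates it back into the audible range. The numpy sketch below illustrates that signal math under assumed parameters (a 25 kHz carrier, a 192 kHz sample rate, and a placeholder tone standing in for recorded speech); it is an illustration of the concept, not a working attack.

```python
# Sketch: the signal processing behind a "dolphin attack". An audible command
# is amplitude-modulated onto an ultrasonic carrier (>20 kHz). Humans hear
# nothing, but a microphone's nonlinear response recovers the baseband.
# All parameters here are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 192_000   # Hz; high enough to represent the ultrasonic carrier
CARRIER_HZ = 25_000     # above 20 kHz, inaudible to humans
DURATION_S = 1.0

t = np.arange(0, DURATION_S, 1 / SAMPLE_RATE)

# Placeholder "voice command": a 400 Hz tone standing in for recorded speech.
baseband = 0.5 * np.sin(2 * np.pi * 400 * t)

# Amplitude modulation shifts the command up around the ultrasonic carrier.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
modulated = (1.0 + baseband) * carrier

# A microphone's nonlinear response acts roughly like squaring the input,
# which recreates a copy of the baseband command in the audible range.
demodulated = modulated ** 2

spectrum = np.abs(np.fft.rfft(demodulated))
freqs = np.fft.rfftfreq(len(demodulated), d=1 / SAMPLE_RATE)
near_400 = spectrum[(freqs > 350) & (freqs < 450)].max()
quiet_band = spectrum[(freqs > 5_000) & (freqs < 6_000)].max()
print(f"energy near 400 Hz after 'demodulation': {near_400:.1f}")
print(f"energy in a quiet 5-6 kHz band:          {quiet_band:.1f}")
```

Running the sketch shows a strong component near 400 Hz in the “demodulated” signal even though nothing audible was ever transmitted, which is the core of the trick.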


