5 Wackiest Cybersecurity Stories of 2023 | #hacking | #cybersecurity | #infosec | #comptia | #pentest | #ransomware

[ad_1]

The world of information security covers a range of topics, and in such a rapidly evolving field, we sometimes come across unique, unusual and even downright whacky stories. These include bizarre attack methods and cybercriminals getting their comeuppance.

This year was no exception to this rule, and in this article Infosecurity Magazine sets out our top 5 wackiest cyber stories of 2023.

1. Hacking into Pets Eating Habits

Security risks posed by the proliferation of home internet of things (IoT) devices have been a major source of concern for several years now, leading to new legislation forcing cybersecurity requirements on manufacturers.

In June 2023, it was revealed that even our pets’ meals could provide a gateway for malicious cyber actors. Researchers from Kaspersky discovered two security flaws in popular smart pet feeders that could lead to data theft and privacy invasion.

The first of these relates to certain smart pet feeders using hard-coded credentials for Message Queuing Telemetry Transport (MQTT), which could allow hackers to execute unauthorized code and gain control of one feeder to launch subsequent attacks on other network devices. They could also tamper with the feeding schedules, potentially endangering the pet’s health.

The other vulnerability is related to an insecure firmware update process, leading to unauthorized code execution, modification of device settings and the theft of sensitive information, including live video feeds sent to the cloud server.

2. BlackCat Gang Taking Incident Reporting Rules Seriously

Ensuring more transparency around cyber-incidents is a key aim of new US Securities and Exchange Commission (SEC) rules, which mandate that publicly listed firms operating in the US must disclose “material” cyber incidents within four days.

However, it is unlikely that the SEC envisioned it would be receiving reports of incidents from the attackers themselves. This is what happened in November 2023, when the BlackCat/ALPHV group revealed it had posted details of its compromise of MeridianLink to the SEC’s “Tips, Complaints, and Referrals” site.

The move appears to be a new way of pressuring victims into paying ransom demands, with the SEC empowered to issue severe penalties for non-compliance with its reporting obligations. No other instances have occurred since November but it will be one to watch in 2023.

3. Cybercriminals Reluctant to Use ChatGPT

Since OpenAI’s launch of its AI-based chatbot ChatGPT in November 2022, there has been lots of discussion about how large language models (LLMs) could be used by cyber threat actors to enhance attacks.

However, research published by Sophos in November 2023 suggested that many threat actors are reluctant to use these tools, even expressing concerns about the wider societal risks they pose. Analyzing several prominent cybercrime forums, the researchers observed that many of the attempts to create malware or attack tools using LLMs were “rudimentary” and often met with skepticism by other users.

Many cybercriminals expressed fears that the creators of ChatGPT imitators were trying to scam them.

4. Google Launches Legal Action Against Scammers

In November 2023, tech giant Google revealed it is pursuing a novel strategy to deter cybercriminals – litigation.

The firm said it is taking legal action against two groups of scammers. The first lawsuit is targeting malicious actors who misled people into unknowingly downloading malware by spoofing Google’s AI tools. Google is seeking an order to stop the scammers from setting up domains like these and allow them to have them disabled with US domain registrars.

The second lawsuit targets the abuse of copyright law by bad actors, with Google highlighting the practice of setting up dozens of Google accounts and using them to submit thousands of bogus copyright claims against their competitors. These claims result in the temporary removal of businesses’ websites, costing victims millions of dollars. Google hope their action will put an end to this activity and deter others.

5. Researchers Find “Silly” Way to Extract ChatGPT Training Data

A team of researchers from Google and several US universities discovered an attack method targeting ChatGPT in November 2023 that they described as “kind of silly.” This unusual technique can extract around a gigabyte of ChatGPT’s training dataset from the model.

The researchers prompted the model with the command to repeat a certain word, e.g. ‘poem’ forever, and sat back and watched as the model responded.

ChatGPT would repeat the word for a while and start including parts of the exact data it had been trained on including email addresses and phone numbers. In the strongest configuration, over 5% of the output ChatGPT emitted was a direct verbatim 50-token-in-a-row copy from its training dataset.

While LLMs should generate responses based on the training data, this training data itself is not meant to be made public. The researchers revealed they spent roughly $200 to extract several megabytes of training data using their method but believe they could have got approximately a gigabyte by spending more money.

[ad_2]

——————————————————-


Click Here For The Original Source.

National Cyber Security

FREE
VIEW