Remember how last week we discussed Anthropic’s Mythos AI model, and how it was reportedly so powerful that it’s been making cybersecurity experts nervous to the point where the company has been choosy about who gets access? Well, this week, a random Discord group managed to get in. I don’t know anyone who didn’t expect this to happen, but it is kind of funny that it happened so quickly.
Members of the Discord group told Bloomberg, which initially broke the story, that they’ve had access since day one, and don’t plan to cause trouble; they just want to kick the tires on it and see how powerful it is. As demand for access grows and Anthropic decides to open the door a little, we’ll have to see what happens, but it’s unlikely everyone who wants in has educational or instructional intentions.
In other AI-related news, Meta announced this week that it’ll let parents see what their children are talking about with the company’s AI chatbots, which is probably a good move, but likely designed to shift responsibility for those conversations onto parents and away from Meta’s own tools.
If you’re a Signal user and were concerned about reports that the FBI was able to extract the contents of deleted Signal messages from an iPhone, worry no longer: Apple has pushed a fix for the loophole that allowed message data to be stored longer than intended. And speaking of Apple-related security news, keep an eye out for emails in your inbox that claim there’s been a change to your Apple account. It’s a phishing scam, designed to harvest your credentials.
Let’s take a look at what else is going on in the infosec world this week.
Dutch Navy Frigate Location Outed by a Bluetooth Tracker
If you want to make sure an airline doesn’t lose your luggage, you might put a Bluetooth tracker inside and monitor your bags from your phone. If you want to make sure you don’t lose your wallet or keys, you might stick a tracker onto the case or put one on your keyring. So why wouldn’t you send a Bluetooth tracker to a warship if you want to keep tabs on its movements? That’s exactly what happened, according to a report by The Register, in which a Dutch journalist mailed a postcard with a tracker embedded in it to the HNLMS Evertsen, an air defense frigate on maneuvers escorting the French aircraft carrier Charles de Gaulle. The tracker was active for about 24 hours, showing the ship leaving port at Heraklion, on the island of Crete, sailing west along the coast, and then turning east toward Cyprus before going dark. Dutch officials say the tracker was discovered during onboard mail sorting and either deactivated or destroyed at that point.
On the bright side, the Dutch government makes it very easy to mail letters and packages to soldiers on deployment and provides detailed instructions on its website. The instructions even say that while packages are X-rayed, greeting cards and postcards are not, which is why the journalist chose to embed the tracker in a postcard. That culture of openness, while admirable, has drawn concern now that someone armed with cheap, easily accessible tech has exposed a significant operational security (opsec) hole for anyone trying to conceal the location or movements of, say, a warship. As a result, the Dutch government has now banned greeting cards and other mail that contain batteries.
Windows Defender Exploits Could Turn Microsoft’s Security Tool Against You
It is a little ironic that in the same week Microsoft announced that Windows Defender should be enough protection for most people (and that may be true!), three proof-of-concept exploits turned up that could completely change the conversation about Windows’ built-in security tool. Dark Reading reports that the exploits turn Defender into an attacking tool. Two of the exploits grant the attacker system-level administrative access to the computer, and the third quietly prevents Defender from updating, leaving the computer exposed to new threats. Of the three exploits, two remain unpatched by Microsoft.
Before we call this an overall win for antivirus providers who still have products to sell, keep in mind that all three vulnerabilities require some initial access to the system to work. That means no one’s going to just cruise in from the wider internet and take over your PC while you’re browsing the web, but it also means that good internet hygiene is more important than ever. You know the drill: don’t click links or download attachments from senders you don’t know or on sites you don’t trust. Use a password manager to ensure you only autofill your password on legitimate websites, not on phishing sites. Be smart about how you browse the web, keep your protection (whatever you use) up to date, and you should be OK.
AI Vendors Shrug Off Responsibility for Security
As we’ve discussed, AI cybersecurity tools are very good at testing for and identifying vulnerabilities in virtually any software they’re directed at. They are not, however, very good at remediating those vulnerabilities, and while AI companies love to tout how effective their tools are at finding bugs and problems, they also love to reject responsibility for those bugs, even when they occur in their own tools.
I caught two different pieces at The Register this week on this topic. The first is a report on how researchers notified Anthropic of a core flaw in the company’s official Model Context Protocol (MCP), one that puts over 200,000 servers at risk. MCP allows the company’s products to interact with the customer information needed to actually perform what those products promise, and a flaw in that protocol means an attacker can jump in and extract sensitive information from companies using Anthropic products. When researchers raised the issue with Anthropic, the company essentially said, “That’s how it’s supposed to work,” and quietly updated its privacy policy to advise customers to “use it with caution.”
The second piece is a deeper, industry-wide examination of the problem. According to Jessica Lyons, the cybersecurity editor at The Register, who wrote both pieces, Anthropic isn’t the only AI company to have been confronted with severe vulnerabilities or exploitable issues in its products, only to essentially reject responsibility for them. The reaction from those firms seems to echo the same response most people get when they point out things like hallucinations, false statements, privacy issues, and other problems with AI in general: “That’s just how it works, it’s not perfect, we’re always improving, get used to it.”
About Our Expert
Alan Henry
Managing Editor, Security
Experience
I’ve been writing and editing stories for almost two decades that help people use technology and productivity techniques to work better, live better, and protect their privacy and personal data. As managing editor of PCMag’s security team, it’s my responsibility to ensure that our product advice is evidence-based, lab-tested, and serves our readers.
I’ve been a technology journalist for close to 20 years, and I got my start freelancing here at PCMag before beginning a career that would lead me to become editor-in-chief of Lifehacker, a senior editor at The New York Times, and director of special projects at WIRED. I’m back at PCMag to lead our security team and renew my commitment to service journalism. I’m the author of Seen, Heard, and Paid: The New Work Rules for the Marginalized, a career and productivity book to help people of marginalized groups succeed in the workplace.
