Anthropic, the AI company valued at around $380 billion, has developed a new AI model called Claude Mythos that it deems too powerful and dangerous to release publicly. Instead, the company is rolling out the model through an invitation-only initiative called Project Glasswing, which will give around 40 organizations focused on defensive cybersecurity access to it. The move comes as Anthropic reportedly prepares for an IPO this year and continues to outpace rivals such as OpenAI in revenue and model capabilities.
Why it matters
Anthropic’s decision to restrict access to its most advanced AI model highlights the growing concerns around the potential misuse of powerful language models. By taking a cautious approach and partnering with select enterprise customers, Anthropic is positioning itself as a responsible leader in the AI industry and building stronger relationships with key players in the cybersecurity space.
The details
The new Claude Mythos model is said to significantly outperform Anthropic’s previous models, as well as those of its competitors. However, the company has decided that the risks of a public release outweigh the potential benefits. Instead, Project Glasswing will give around 40 organizations, including tech giants Amazon, Apple, Microsoft, and Cisco, access to the model to help secure critical software systems against the threats posed by advanced AI.
- Anthropic has been on a growth tear, hitting a $30 billion annual revenue run rate, up 58% in March alone.
- The company is reportedly preparing for an IPO this year.
The players
Anthropic
An AI company valued at around $380 billion that has developed a powerful new language model called Claude Mythos, which it deems too risky to release publicly.
Dario Amodei
The CEO of Anthropic, who is taking a different approach to restricting access to advanced AI models than his former OpenAI colleague Sam Altman.
OpenAI
A rival AI lab that once made a similar call to withhold the release of its GPT-2 model over concerns it could be misused to generate convincing fake text.
Paulo Shakarian
A professor of artificial intelligence at Syracuse University who believes Anthropic’s approach of creating a tightly controlled consortium and working directly with industry partners is a smart brand-building move.
Richard Whaling
The lead researcher at cybersecurity startup Charlemagne Labs, who suggests that resource limitations, in addition to safety concerns, may be holding Anthropic back from commercializing the powerful Mythos model.
What they’re saying
“I share Anthropic’s concerns around Mythos’ potential misuse, but I think there is also a resource limitation at play. Anthropic has not announced how large Mythos is, but has implied that it is many times larger—and more expensive—than Claude Opus. I think it is likely that they simply do not have the GPU and other compute resources available to serve it at scale.”
— Richard Whaling, Lead researcher, Charlemagne Labs
By creating a tightly controlled consortium and working directly with industry partners, Anthropic is “taking a lead in the industry as to mitigating these new risks,” an approach that “plays really well with the chief security officers of the world.”
— Paulo Shakarian, Professor of artificial intelligence, Syracuse University
What’s next
Anthropic has said it is already working on safeguards for the Mythos model, and AI models tend to become cheaper and more practical to serve over time, which could eventually open the door to a wider release. In the meantime, some customers may be willing to pay a premium for the model’s advanced capabilities.
The takeaway
Anthropic is betting that a controlled rollout beats an open release. Project Glasswing lets the company put Mythos to work on defensive cybersecurity with roughly 40 vetted partners while limiting the risk of misuse, reinforcing its reputation as a safety-focused leader ahead of a reported IPO and deepening its relationships with key players in the security industry.