
Anthropic Unveils Claude Mythos and Project Glasswing — The AI Model Too Dangerous to Release Publicly

Anthropic's most powerful model has found thousands of zero-day vulnerabilities in every major operating system and browser. Rather than releasing it publicly, the company is deploying it as a defensive weapon through a coalition of tech giants — raising urgent questions about what happens when AI outpaces the humans tasked with securing the world's software.

Anthropic has taken the unusual step of withholding its most capable AI model from the public, not because it underperforms, but because it works too well. Claude Mythos Preview, announced Tuesday as part of a new industry initiative called Project Glasswing, has demonstrated cybersecurity capabilities so advanced that the company has concluded a general release would pose unacceptable risks.

https://www.anthropic.com/glasswing

The model has already identified thousands of previously unknown zero-day vulnerabilities — including critical flaws in every major operating system and every major web browser — many of which have survived decades of human review and millions of automated security tests. One vulnerability it discovered in OpenBSD, widely regarded as one of the most security-hardened operating systems in the world, had gone undetected for 27 years. Another, in the ubiquitous video-processing library FFmpeg, had been missed despite automated testing tools hitting the relevant line of code five million times.

Rather than making Mythos available through its API, Anthropic has assembled a coalition of 12 launch partners — including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks — to deploy the model exclusively for defensive security work. An additional 40 organisations that build or maintain critical software infrastructure will also receive access. Anthropic is committing up to $100 million in usage credits and $4 million in direct donations to open-source security organisations to support the effort.

A Model That Thinks Like a Security Researcher

What separates Mythos from its predecessors is not just the volume of vulnerabilities it can find, but its ability to operate with minimal human guidance. According to Anthropic, the model identified nearly all the disclosed vulnerabilities — and developed working exploits for many of them — entirely autonomously. In one case, it independently discovered and chained together several Linux kernel vulnerabilities to escalate from ordinary user access to full machine control, a feat that typically requires extensive manual expertise.

Logan Graham, who leads Anthropic’s frontier red team, told Axios that Mythos Preview is “extremely autonomous” and possesses reasoning capabilities comparable to an advanced human security researcher. Where its predecessor, Claude Opus 4.6, found approximately 500 zero-days in open-source software, Mythos Preview’s output runs into the tens of thousands.

On the CyberGym vulnerability reproduction benchmark, Mythos scored 83.1% compared to Opus 4.6’s 66.6%. The performance gap extends across coding and reasoning tasks more broadly: on SWE-bench Verified, a measure of real-world software engineering capability, Mythos scored 93.9% against Opus 4.6’s 80.8%.

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back,” said Anthony Grieco, SVP and Chief Security and Trust Officer at Cisco. “The old ways of hardening systems are no longer sufficient.”

The Dual-Use Dilemma

The capabilities that make Mythos valuable to defenders are precisely what make it dangerous in the wrong hands. This dual-use tension sits at the heart of the Project Glasswing announcement and represents one of the most concrete examples yet of the cybersecurity arms race that OpenAI’s recently published industrial policy paper warned about.

In that document, released the day before Anthropic’s announcement, OpenAI identified AI-enabled cyberattacks and biological threats as the two most immediate near-term risks from advanced AI. Sam Altman told Axios that top tech, business and government officials fear that soon-to-be-released models could enable a world-shaking cyberattack this year. The Mythos announcement lends considerable weight to that warning.

The timing is not coincidental. The broader AI industry has been converging on the view that frontier models now pose genuine cybersecurity risks at a scale that demands coordinated action. OpenAI warned in December that its own upcoming models posed a “high” cybersecurity risk. CNN reported that experts see the emergence of AI agents — autonomous systems capable of scanning for and exploiting vulnerabilities far faster than human hackers — as a step-change in the threat landscape.

“If we are crossing the Rubicon where you can functionally automate those capabilities and make them very cheap as well, then we’re in an entirely new world,” Graham told the Washington Examiner.

Real-world evidence already supports this concern. Anthropic has previously disclosed that a Chinese state-sponsored hacking group exploited Claude’s agentic capabilities to infiltrate roughly 30 organisations, including tech companies, financial institutions and government agencies. In February, a hacker used Claude in a series of attacks against Mexican government agencies, stealing sensitive tax and voter information.

The Glasswing Strategy: Defence Before Offence

Anthropic’s approach — restricting the model to vetted partners rather than releasing it publicly — represents a notable departure from the general trend in AI toward broad availability. The company is explicitly betting that giving defenders a head start will create a durable advantage before similar capabilities proliferate through competing models.

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI,” said Elia Zaitsev, CTO of CrowdStrike. “That is not a reason to slow down; it’s a reason to move together, faster.”

Jim Zemlin, CEO of the Linux Foundation, emphasised the implications for open-source software, which underpins the vast majority of modern digital infrastructure. Open-source maintainers — often volunteers working without dedicated security teams — have historically been responsible for software that runs the world’s servers, banking systems and communications networks. Project Glasswing, Zemlin said, offers a path to making AI-augmented security accessible to maintainers who previously could not afford it.

Microsoft, which is simultaneously a partner in Project Glasswing and a major investor in OpenAI, reported that when tested against its CTI-REALM security benchmark, Mythos Preview showed substantial improvements over previous models. Igor Tsyganskiy, EVP of Cybersecurity and Microsoft Research, said the initiative allows the company to “identify and mitigate risk early.”

Questions of Trust and Timing

The announcement is not without complications. Anthropic's own security track record has recently been tested: last month, the company accidentally exposed nearly 2,000 source code files and over half a million lines of code through a mistake in its Claude Code software package. The existence of Mythos itself was first revealed through a leak from an unsecured, publicly searchable data cache — an ironic lapse for a company positioning itself as a cybersecurity leader.

Anthropic has also been navigating a high-stakes dispute with the U.S. Department of Defense over the military's use of Claude in real-world operations, adding geopolitical complexity to its government engagement around Mythos. The company says it has been briefing CISA, the Commerce Department and other agencies on the model's capabilities, framing the initiative in national security terms.

The broader question — raised by OpenAI’s industrial policy paper and now given tangible form by Mythos — is whether the institutions, regulations and coordination mechanisms needed to manage AI-driven cybersecurity risk can keep pace with the technology itself. Anthropic has committed to publishing a public report on lessons learned within 90 days. Partners will share information and best practices. Practical recommendations for how vulnerability disclosure, patching, and software development practices should evolve in the AI era are expected to follow.

But as Graham acknowledged, Mythos is not the end of the story. Behind it sits the next OpenAI model, the next Google Gemini, and a few months behind them, open-source Chinese models that could replicate similar capabilities without the guardrails or coalition-based deployment that Project Glasswing represents.

“No one organisation can solve these cybersecurity problems alone,” Anthropic wrote. “The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months.”

The race between AI-powered attack and AI-powered defence is now fully underway. Project Glasswing is Anthropic’s opening bid that the defenders can win. Whether it proves sufficient will depend on how quickly the rest of the industry, and the world’s governments, follow.

