Amodei heads to White House over Anthropic Mythos dispute

Anthropic CEO Dario Amodei is scheduled to meet White House Chief of Staff Susie Wiles this Friday as the company works to resolve a months-long dispute with the Pentagon over Anthropic Mythos, its most advanced AI model. The meeting represents the most direct attempt yet to end a standoff that has left the US government simultaneously locked out of and actively seeking access to a technology its own agencies have flagged as a national security priority.

Mythos, announced on 7 April, proved during testing to be capable of identifying and exploiting thousands of zero-day vulnerabilities across every major operating system and web browser, including flaws that had survived decades of human security review and millions of automated scans. Multiple US agencies, among them the Treasury Department, the intelligence community, and the Cybersecurity and Infrastructure Security Agency, are now seeking access.

The Pentagon dispute behind the Anthropic Mythos standoff

The conflict began in February, when Defense Secretary Pete Hegseth demanded that Anthropic grant the Pentagon unrestricted access to its AI models for all lawful purposes, including potential use in autonomous weapons systems and domestic surveillance. Amodei refused. Hegseth responded by designating Anthropic a national security supply-chain risk, a label previously reserved for companies linked to foreign adversaries, which barred Anthropic from all Defense Department contracts.

Anthropic filed two federal lawsuits in early March alleging illegal retaliation. A federal judge initially blocked the blacklisting, but an appeals court reversed that ruling on 8 April. The company has been excluded from Pentagon contracts since then, though it can still work with other government agencies.

Both sides now occupy an awkward position: federal agencies are seeking access to a model built by a company the Pentagon has blacklisted. The Treasury Department wants Mythos to scan its own systems for vulnerabilities. The White House Office of Management and Budget is building a framework to allow federal agencies to use a controlled version. According to Axios, Anthropic has hired Trumpworld consultants to facilitate negotiations, and Friday’s meeting is designed to lay the groundwork for a formal agreement.

What Anthropic Mythos can do

Anthropic Mythos is not a purpose-built cybersecurity tool. It is a general-purpose AI model that turned out to have cybersecurity capabilities during testing. When directed to develop working exploits, it succeeded on the first attempt in more than 83% of cases. It completed a 32-step corporate network attack simulation from start to finish, a milestone no prior AI model had reached. The UK’s AI Security Institute evaluated a preview version and found it “substantially more capable at cyber offence than any model previously assessed,” and the first capable of chaining multiple attack steps into complete end-to-end intrusions.

Rather than release Mythos publicly, Anthropic created Project Glasswing, a controlled access programme that provides the model to roughly 40 vetted organisations. Participants include Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, JPMorgan Chase, and Palo Alto Networks. The goal is to let those organisations find and fix vulnerabilities in critical software before attackers can exploit them. Anthropic committed up to $100 million in Mythos usage credits and $4 million in donations to open-source security projects to support that work.

JPMorgan Chase CEO Jamie Dimon said publicly that Mythos “reveals a lot more vulnerabilities” for cyberattacks. The Council on Foreign Relations called it “an inflection point for AI and global security.”

The global response to Anthropic Mythos

Concern about Anthropic Mythos extends well beyond Washington. Bank of England Governor Andrew Bailey named it as a cybersecurity risk in a speech at Columbia University on 15 April. The Bank’s Cross Market Operational Resilience Group is convening an emergency briefing within the fortnight with the CEOs of the UK’s eight largest banks, four financial infrastructure providers, two insurers, and representatives from the Treasury, the Financial Conduct Authority, and the National Cyber Security Centre.

Anthropic is preparing to extend Project Glasswing to select British banks within days and is expanding its London office to 800 staff in King’s Cross. The UK’s AI Security Institute, which has an existing evaluation partnership with Anthropic, published its full technical assessment on 17 April. Canadian Finance Minister François-Philippe Champagne described Mythos as an “unknown unknown” currently being discussed at IMF meetings.

The international dimension adds pressure to the White House talks. If the dispute with the Pentagon remains unresolved, US allies could gain access to Anthropic Mythos through Project Glasswing before American government agencies do. That outcome gives the White House an incentive to reach a settlement that goes beyond the original disagreement over safety guardrails.

What a deal could look like

The broad shape of a resolution is visible from both sides’ stated positions. Anthropic Mythos access would be restored for defensive cybersecurity purposes under a formal government agreement. The Pentagon would withdraw the supply-chain risk designation. Anthropic would keep its restrictions on autonomous weapons and mass surveillance but could agree to a review process for specific military use cases that fall outside those lines.

Anthropic has commercial leverage in the negotiation. Its annualised revenue has reached $30 billion, it has attracted investor interest at an $800 billion valuation, and the company is exploring an IPO. It does not need Defense Department contracts to remain financially viable. Its goal is a resolution that lets it work with the broader US government without abandoning the safety principles that generated the dispute in the first place.

Whether Friday’s meeting produces a deal or only opens a longer process, the sequence of events illustrates where AI governance currently stands. Anthropic built a general-purpose AI model, found during testing that it had unprecedented cybersecurity capabilities, restricted its release on safety grounds, was penalised by the government for those same restrictions, and is now being courted by that government because the technology is too consequential to stay at arm’s length.