
Anthropic Built an AI It Won't Let You Use - Here's Why


April 14, 2026 · 3 min read · By Andres · Updated April 14, 2026

Anthropic just did something no major AI company has done before. It built its most powerful model ever - and then refused to release it.

TL;DR: Claude Mythos Preview is Anthropic's most capable AI model to date. It found thousands of previously unknown security vulnerabilities during testing. Anthropic assessed the cybersecurity risk as too high for public release and instead deployed it through Project Glasswing - a restricted program that gives 40+ companies, including Apple, Amazon, Microsoft, and Google, access for defensive security work only.

What Actually Happened

On April 7, Anthropic unveiled Claude Mythos Preview. Here's the thing - this isn't another incremental upgrade. During testing, Mythos discovered thousands of zero-day vulnerabilities across major software systems. Zero-days are security holes that nobody knew existed. Not the software companies, not the security teams, not the hackers. Nobody.

So Anthropic looked at what they'd built and made a call: this model is too dangerous to put in everyone's hands.

Instead, they created Project Glasswing - think of it as a velvet rope around the most capable AI on the planet. Over 40 companies got access: Apple, Amazon, Microsoft, Google, NVIDIA, Cisco, CrowdStrike, JPMorganChase, Broadcom, Palo Alto Networks, and the Linux Foundation, among others. Every single one of them is using Mythos for one purpose only - finding and fixing security holes before attackers do. Anthropic has been in direct discussions with CISA about the model's capabilities.

The pricing tells you how serious they are about controlling access: $25 per million input tokens, $125 per million output tokens, with Anthropic committing $100 million in usage credits to cover the research preview period.
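For a sense of what those rates mean in practice, here's a quick back-of-the-envelope cost sketch. The token counts in the example are made up for illustration - they aren't real usage figures:

```python
# Illustrative cost estimate at the published Mythos rates:
# $25 per million input tokens, $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at Mythos pricing."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical large code-audit request: 200k tokens in, 20k out.
print(f"${estimate_cost(200_000, 20_000):.2f}")  # $7.50
```

Even a single heavyweight audit request runs into dollars rather than cents, which fits the picture of a tool priced for enterprise security teams, not casual use.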

Why This Matters to You

Now, here's where this gets interesting for people who aren't running a Fortune 500 security team.

This is the first time a major AI lab has publicly said "we built something too powerful to share." That sets a precedent. The AI tools you use every day - Claude, ChatGPT, Gemini - are the versions these companies decided were safe enough for public release. Mythos is what sits behind the curtain.

And Anthropic isn't alone in this thinking. OpenAI is building its own restricted cybersecurity AI through a program called Trusted Access for Cyber, backed by $10 million in API credits for select businesses. Two major labs, same conclusion: some AI capabilities need a locked door.

What this means practically - future Claude releases will likely reflect capability limits informed by what Mythos testing revealed. The AI you interact with is shaped by what the company decides you shouldn't have access to.
