Issue #47 · April 12, 2026 · 5 min read

The AI newsletter for practitioners who actually ship.

This week: Anthropic hits $30B ARR, a model deemed "too dangerous to release," and the open-source benchmark takeover.

Top Story

Anthropic × Mythos: The Model They Won't Let You Touch

Anthropic built something so capable they decided the normal launch playbook didn't apply — and the industry can't stop talking about it.

Anthropic's new model, Mythos, has done something remarkable: become the week's biggest story without anyone outside a 40-person partner list actually using it. In a highly unusual move, Anthropic determined Mythos was "too powerful to release in the normal way."

Access is currently restricted to a small cohort of approximately 40 partners, with usage constrained strictly to cybersecurity-focused engagements. That's it. No developer preview. No waitlist. No leaked weights.

The intrigue surrounding a model deemed too risky for general release sent the AI community into a frenzy, overshadowing what would otherwise be the week's biggest milestones: Z.AI's open-source GLM 5.1 release and Anthropic's own landmark revenue figures. Notably, OpenAI announced a similarly staggered rollout of its own new model, citing parallel cybersecurity concerns, a sign that an industry-wide moment of caution has arrived.

The most powerful AI model of the week is one nobody gets to run. That's a genuinely new kind of announcement.

  • $30B: Anthropic ARR
  • 3.5 GW: New Compute Capacity
  • 754B: GLM 5.1 Parameters

Revenue & Infrastructure

💰 Anthropic Hits $30B ARR — and Signs a 3.5 Gigawatt Compute Deal

The battle for revenue dominance has a new frontrunner. Anthropic has officially hit a $30 billion annualized revenue run rate, a 3× increase since year-end — driven almost entirely by enterprise customers. The number suggests Anthropic has now surpassed OpenAI in annualized run rate.

To sustain the growth (which has been compared internally to Nvidia's historic surges), Anthropic signed a massive compute expansion deal with Google and Broadcom. The deal brings 3.5 gigawatts of capacity online starting in 2027, leaning heavily on Google TPUs for inference while AWS handles training.

The arms race price tag is eye-watering: OpenAI expects to spend $30 billion on model training this year alone. Anthropic projects its own training costs will hit $28 billion by 2028.

Model Releases

🦈 Meta's Muse Spark, Z.AI's Open-Source Giant & Google Goes Local

Beyond the Mythos blackout, developers had plenty of new models to stress-test this week:

  • Meta Muse Spark
    From Meta's new Super Intelligence Lab — a natively multimodal reasoning model built for personal agents, not enterprise workloads. It ships with three modes: instant (zero reasoning), thinking, and contemplating (deep multi-step research). Visual understanding is a standout.
  • Z.AI GLM 5.1 — Open-Source Benchmark King
    754 billion parameters. A 58.4 on SWE-bench Pro, beating GPT 5.4 and Opus 4.6. The demo that got everyone talking: GLM 5.1 spent 8 hours building a Linux desktop entirely without human intervention. Long-horizon tasks are no longer a closed-model advantage.
  • Google AI Edge Eloquent
    A live AI dictation app running entirely on-device via the Gemma 4 small language model. No cloud. No latency. A quiet signal that capable, offline mobile agents are closer than the hype cycle suggests.

New Tools

🛠️ Claude Managed Agents, Excalidraw Integration & Google Notebooks

The shift from "AI assistant" to "autonomous agent" is no longer theoretical. The tooling is shipping.

  • Claude Managed Agents
    Anthropic launched a full infrastructure package — agent harness, memory systems, sandboxed environment — letting developers go from prototype to production in days. Already demoed automating client onboarding inside Notion. Agents run autonomously for hours without babysitting.
  • Claude + Excalidraw
    A new open-source integration lets Claude Code generate professional architecture and workflow diagrams in Excalidraw via plain English. Claude reads the live canvas, iterates on layout, fixes overflowing text, adjusts color schemes on request. Diagramming as a conversation.
  • Google Gemini Notebooks
    Google retired the confusing "Gems" feature and replaced it with Notebooks — effectively pulling NotebookLM's resource management directly into the Gemini app. Personal knowledge base, finally native.

🏆 Watercooler Moment of the Week

"Token Maxing" Is Meta's New Status Symbol

How does Meta measure an engineer's productivity in the age of AI? Simple: token consumption.

Meta employees created an internal leaderboard called "Claudonomics" to track who burns through the most Claude tokens. Engineers are spinning up massive numbers of parallel agents purely to climb the ranks, earning titles that would look absurd on a business card.

  • 01 Session Immortal (∞ tokens)
  • 02 Token Legend (100M+)
  • 03 Parallel Overlord (50M+)
  • 04 Context Nomad (10M+)
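For the curious, the tier table above boils down to a simple threshold lookup. The tier names and token cutoffs are from the leaderboard; the function itself, its name, and the "Unranked" fallback are purely illustrative:

```python
# Illustrative sketch of the "Claudonomics" tier lookup.
# Tier names/thresholds are from the leaderboard; the code is hypothetical.
TIERS = [
    (float("inf"), "Session Immortal"),  # the (tongue-in-cheek) unreachable cap
    (100_000_000, "Token Legend"),
    (50_000_000, "Parallel Overlord"),
    (10_000_000, "Context Nomad"),
]

def claudonomics_title(tokens: int) -> str:
    """Return the highest tier whose token threshold has been met."""
    for threshold, title in TIERS:  # ordered highest tier first
        if tokens >= threshold:
            return title
    return "Unranked"

print(claudonomics_title(120_000_000))  # Token Legend
print(claudonomics_title(3_000_000))    # Unranked
```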

Leadership actively encourages the compute spend, treating maximum token consumption as a direct proxy for 10× efficiency. Whether that's visionary or absurd depends entirely on whether the outputs justify the inference bills. We'll find out when the quarterly numbers land.