Anthropic Unveils 'Dreaming' AI That Learns From Its Own Mistakes at Scale
Anthropic's new 'dreaming' system lets AI agents learn from past sessions to improve accuracy. Early adopters see major gains. CEO reveals 80x growth.
Anthropic has launched a groundbreaking new capability called "dreaming" that allows AI agents to autonomously learn from their own past sessions and continuously improve. The announcement came Tuesday at the company's second annual Code with Claude developer conference in San Francisco.
The system moves beyond simple memory by analyzing patterns across thousands of agent interactions, extracting recurring mistakes and optimal workflows. Early adopters report dramatic improvements: legal AI firm Harvey saw a sixfold increase in task completion rates after implementing the feature.
How Dreaming Works
Dreaming is a scheduled background process that reviews an agent's session history and memory stores, curating insights that no single session could produce. This allows agents to surface shared preferences, common error patterns, and cross-agent workflow optimizations.
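Anthropic has not published implementation details, but the mechanism as described — an offline pass that scans many sessions and promotes only patterns that recur across them — can be sketched roughly as follows. Every name here (`Session`, `dream`, the recurrence threshold) is an illustrative assumption, not Anthropic's actual API.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical session record; the real schema is not public.
@dataclass
class Session:
    errors: list[str] = field(default_factory=list)
    workflows: list[str] = field(default_factory=list)

def dream(sessions: list[Session], min_recurrence: int = 3) -> dict:
    """Offline 'dreaming' pass: keep only patterns seen in at least
    `min_recurrence` distinct sessions, so one-off noise in a single
    session never becomes a learned insight."""
    error_counts: Counter = Counter()
    workflow_counts: Counter = Counter()
    for s in sessions:
        # Count each pattern at most once per session: the signal is
        # cross-session recurrence, not raw frequency in one session.
        error_counts.update(set(s.errors))
        workflow_counts.update(set(s.workflows))
    return {
        "recurring_errors": [e for e, n in error_counts.items() if n >= min_recurrence],
        "common_workflows": [w for w, n in workflow_counts.items() if n >= min_recurrence],
    }

# An error seen in three sessions is promoted to an insight;
# an error seen only once is filtered out.
sessions = [
    Session(errors=["timeout_on_large_diff"], workflows=["lint_then_test"]),
    Session(errors=["timeout_on_large_diff"], workflows=["lint_then_test"]),
    Session(errors=["timeout_on_large_diff", "flaky_import"], workflows=["lint_then_test"]),
]
insights = dream(sessions)
```

The key design idea the quote below gestures at is the same one this sketch encodes: individual sessions are noisy, and only patterns confirmed across many of them are worth learning.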

"Dreaming is analogous to how people within an organization reflect on shared experiences to improve collectively," said Alex Albert, Anthropic's head of research product management, in an interview at the conference.
Two More Features Enter Public Beta
Alongside dreaming, Anthropic moved two experimental features — outcomes and multi-agent orchestration — from research preview into public beta. Outcomes provides detailed post-task analytics, while multi-agent orchestration enables multiple AI agents to collaborate on complex workflows.
Medical document review company Wisedocs cut its document review time by 50% using outcomes. Netflix now processes logs from hundreds of simultaneous builds using the multi-agent orchestration feature.
Background: Explosive Growth Strains Infrastructure
Anthropic CEO Dario Amodei revealed during a fireside chat that the company is experiencing unprecedented growth, far exceeding its own aggressive projections. In the first quarter of 2026, revenue and usage grew at an annualized rate of 80x, compared to the 10x growth the company had planned for.
"We tried to plan very well for a world of 10x growth per year," Amodei said. "And yet we saw 80x. And so that is the reason we have had difficulties with compute."
API volume on the Claude platform is up nearly 70x year over year, and the average developer using Claude Code now spends 20 hours per week with the tool. The company says the three new features address three of the hardest problems in running AI agents at scale: accuracy, continuous learning, and coordination bottlenecks on complex tasks.
What This Means for Enterprise AI
The dreaming capability represents a significant step toward the self-correcting, self-improving AI systems that enterprises have long said they need before trusting agents with production workloads. By enabling agents to learn from their own mistakes without human intervention, Anthropic is reducing the need for constant manual oversight.
For developers, the combination of dreaming, outcomes analytics, and multi-agent orchestration creates a robust framework for deploying reliable AI agents at scale. The company expects these tools to accelerate adoption in industries like legal, healthcare, and logistics where accuracy and continuous improvement are critical.
Industry analysts note that while other AI companies offer agent memory, Anthropic's dreaming operates at a higher level of abstraction, extracting cross-session patterns rather than recalling individual interactions. This could give Anthropic a competitive edge as enterprises seek AI systems that require less hand-holding and deliver consistent, improving performance over time.