Published: 2026-05-10

Claude Managed Agents Add Dreaming, Outcomes, and Multi-Agent Orchestration

Anthropic announced four major additions to Claude managed agents: Dreaming, Outcomes, multi-agent orchestration, and webhooks. Each works independently, but together they shift Claude from a session-bounded tool to a persistent, self-improving agent system. Harvey Legal saw task completion rates jump roughly 6x after enabling Dreaming in internal testing. Outcomes improved output quality by 8.4% for documents and 10.1% for presentations in Anthropic's internal benchmarks.

Source video

"NEW Claude Update is a GAME CHANGER!" by Julian Goldie SEO — Watch on YouTube →

Key Takeaways

  • Dreaming: a scheduled background process that reviews past sessions, extracts patterns, and restructures memory — agents improve without retraining or prompt changes.
  • Outcomes: write a quality rubric once; a separate grader agent evaluates every output against it and auto-retries when quality falls short — up to 10 percentage points improvement on hard tasks.
  • Multi-agent orchestration: a lead agent breaks complex jobs into pieces and delegates each to a specialist with its own model, prompt, and tools, all running in parallel on a shared file system.
  • Webhooks connect agent completion events to external tools (CRM, email platforms, project management) — no copy-pasting, no manual handoffs.
  • Harvey Legal saw a roughly 6x improvement in task completion rates from Dreaming; Outcomes yielded 8.4% better Word documents and 10.1% better presentations in internal tests.
  • Dreaming is still in research preview and requires an access request; Outcomes and webhooks are available now.
  • You stay in control of Dreaming: choose automatic memory updates or review changes before they go live.

Dreaming: How Agent Memory Improves Between Sessions

The name is intentional. Your brain collects information during the day and consolidates it during sleep — keeping what matters, dropping what doesn't, and building patterns for future use. Claude managed agents now do the same. Dreaming is a scheduled process that runs between sessions, reviewing past tool calls, decisions, and outcomes. It finds successful workflows and recurring mistakes, extracts patterns, and restructures the agent's memory stores so they stay useful as they grow.

For a business using an agent to handle customer inquiries: on day one, it's functional. By week two, it has spotted the 10 most common issues, learned the best responses, and remembered which approaches got the best results. This happens automatically. Harvey Legal, which uses managed agents for long-form legal drafting and document creation, saw its agents remember file-type workarounds and tool-specific patterns across sessions, with completion rates roughly 6x higher than before.
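Anthropic hasn't published Dreaming's internals, but the consolidation idea is easy to picture. A hypothetical sketch (all names and the `min_support` threshold are invented, not Anthropic's actual algorithm): patterns that recur across sessions get promoted into long-term memory, while one-off noise ages out.

```python
from collections import Counter

def consolidate(session_logs, memory, min_support=2):
    """Hypothetical 'dreaming' pass: promote patterns that recur
    across sessions into long-term memory, drop one-off noise."""
    observed = Counter()
    for log in session_logs:
        for pattern in log["patterns"]:  # e.g. "use pandoc for .docx"
            observed[pattern] += 1
    # Promote recurring patterns into memory...
    for pattern, count in observed.items():
        if count >= min_support:
            memory[pattern] = memory.get(pattern, 0) + count
    # ...and keep only entries with enough support.
    return {p: c for p, c in memory.items() if c >= min_support}

logs = [
    {"patterns": ["use pandoc for .docx", "retry flaky API"]},
    {"patterns": ["use pandoc for .docx"]},
]
memory = consolidate(logs, {})
# The recurring workaround is retained; the one-off observation is dropped.
```

The point is the shape of the process, not the specifics: review raw session history, extract what recurs, and rewrite memory so it stays compact as it grows.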

Outcomes: Self-Grading at Scale

Most AI outputs require a human to check quality before they're usable. Outcomes eliminates this bottleneck. Developers write a rubric — a set of criteria describing what "good" looks like for a specific task. A dedicated grader agent, running in its own context window, evaluates every output against that rubric independently. If the output falls short, the grader sends specific feedback and the agent takes another pass automatically. This continues until the output passes or a retry limit is hit.
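The generate-grade-retry cycle described above can be sketched as a plain loop. Everything here is a stand-in (the real grader is an agent in its own context window, not a local function):

```python
def run_with_outcomes(generate, grade, max_retries=3):
    """Hypothetical sketch of the Outcomes loop: `generate(feedback)`
    produces a draft; `grade(draft)` returns (passed, feedback).
    Retries until the draft passes or the limit is hit."""
    feedback = None
    for attempt in range(1, max_retries + 1):
        draft = generate(feedback)
        passed, feedback = grade(draft)
        if passed:
            return draft, attempt
    return draft, max_retries  # best effort after retry limit

# Toy generator that only tightens up once it receives feedback.
def generate(feedback):
    return "short draft" if feedback else "a very long rambling draft..."

def grade(draft):
    ok = len(draft.split()) <= 3
    return ok, None if ok else "Too long; tighten it."

draft, attempts = run_with_outcomes(generate, grade)
# First pass fails the grader, second pass succeeds.
```

The feedback channel is what makes the retry productive: the agent gets told *which* criterion failed rather than simply being asked to try again.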

The grader runs independently to avoid the agent evaluating its own work in the same context — a known failure mode where the agent defends its output rather than genuinely assessing it. Practical example: a content agent writing posts and emails could have a rubric requiring a specific tone, one clear action step, under 150 words, and no jargon. The grader catches drafts that miss any criterion before they ever reach a human reviewer.
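Most of that example rubric is mechanically checkable. A hypothetical sketch of how the word-count, action-step, and jargon criteria might be encoded (the verb list and jargon list are invented; tone would need an LLM judge):

```python
def grade_post(text, max_words=150,
               jargon=("synergy", "leverage", "paradigm")):
    """Check a draft against the example rubric: under 150 words,
    one clear action step, no jargon. Returns (passed, failures)."""
    failures = []
    words = text.split()
    if len(words) > max_words:
        failures.append(f"{len(words)} words; limit is {max_words}")
    action_verbs = {"sign", "reply", "click", "register"}
    if not any(w.lower().rstrip(".,!") in action_verbs for w in words):
        failures.append("no clear action step found")
    hits = [j for j in jargon if j in text.lower()]
    if hits:
        failures.append("jargon: " + ", ".join(hits))
    return (len(failures) == 0), failures

ok, why = grade_post("Quick update on pricing. Reply with your tier by Friday.")
# Short, has an action verb, no jargon → passes with no failures.
```

The failure list doubles as the feedback the grader sends back, which is why specific, checkable criteria beat a vague "make it good" instruction.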

Multi-Agent Orchestration and Webhooks

Until now, Claude handled complex tasks sequentially — one agent doing everything in order. Multi-agent orchestration introduces a lead agent that acts as a project manager: it takes a brief, breaks it into parts, and delegates each part to a specialist with its own model, prompt, and tools. Specialists work in parallel on a shared file system, contributing results back to the lead agent's context as they complete. Complex jobs that previously stalled or lost coherence over long sequences now run faster and with better output.
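The fan-out/fan-in pattern is familiar from concurrent programming. A hypothetical local sketch using threads and a shared directory as the "shared file system" (the specialist function is a stand-in for a real agent with its own model, prompt, and tools):

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import tempfile

def run_specialist(role, brief, workspace):
    """Stand-in for a specialist agent: does its slice of the brief
    and writes the result into the shared workspace."""
    out = Path(workspace) / f"{role}.md"
    out.write_text(f"# {role} result for: {brief}\n")
    return str(out)

def lead_agent(brief, roles, workspace):
    # Fan out: one specialist per subtask, all running in parallel.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_specialist, r, brief, workspace)
                   for r in roles]
        paths = [f.result() for f in futures]
    # Fan in: the lead agent assembles the specialists' files.
    return "\n".join(Path(p).read_text() for p in paths)

with tempfile.TemporaryDirectory() as ws:
    report = lead_agent("Q3 launch plan", ["research", "copy", "design"], ws)
# report contains one section per specialist
```

The shared workspace is the key design choice: specialists exchange artifacts through files rather than stuffing everything into one context window, which is what lets long jobs keep their coherence.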

Webhooks close the loop with external systems. When an agent finishes a task, a webhook can fire automatically — updating a CRM, triggering an email sequence, creating a project management task, or any other integration. The agent doesn't just produce output; it completes the workflow. Together, these four features form a system: orchestration runs jobs in parallel, Outcomes grades every output, Dreaming improves memory over time, and webhooks connect everything to your existing tools.
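On the receiving side, a webhook is just a signed HTTP POST. A hypothetical sketch of a completion-event payload (the field names and event string are invented, not Anthropic's actual schema); the HMAC signature is the standard way a receiver verifies the sender:

```python
import json, hmac, hashlib

def build_completion_event(agent_id, task_id, status, output_url, secret):
    """Build a hypothetical agent-completion webhook payload plus
    headers, with an HMAC-SHA256 signature over the body."""
    payload = {
        "event": "agent.task.completed",
        "agent_id": agent_id,
        "task_id": task_id,
        "status": status,
        "output_url": output_url,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {"Content-Type": "application/json",
               "X-Signature-SHA256": signature}
    return body, headers

body, headers = build_completion_event(
    "agent_42", "task_7", "succeeded",
    "https://example.com/out.docx", secret=b"shared-secret")
# POST `body` with `headers` to the CRM / email / PM endpoint.
```

Whatever the real schema turns out to be, the receiving system's job is the same: verify the signature, read the event type, and kick off the downstream step.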
