Claude API. Production agents on Anthropic's model line-up.
Claude is the model behind most of the agents we ship: Opus for the hardest reasoning, Sonnet for the production workhorse, Haiku for dispatching and routing. The API also gives you the production primitives that turn a model into a system: prompt caching, 1M context, tool-use telemetry, and structured outputs.
Picking a model
Haiku 4.5 for routing, classification, and cheap dispatch. Sonnet 4.6 as the production workhorse: most agent reasoning, tool selection, and code work belong here. Opus 4.7 with its 1M context for the hardest steps and whole-codebase reasoning. Cost per completed task is the metric to track, not the headline token price.
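A minimal routing sketch of that tiering, assuming the anthropic Python SDK. The model IDs and the keyword heuristic in classify_tier are placeholders for illustration, not real identifiers or a recommended dispatcher.

```python
# A minimal tier-based router. "anthropic" is the official Python SDK;
# the model IDs below are placeholders, so check Anthropic's current model list.
import anthropic

MODEL_BY_TIER = {
    "dispatch": "claude-haiku-4-5",    # routing, classification, cheap dispatch
    "workhorse": "claude-sonnet-4-6",  # most agent reasoning and code work
    "deep": "claude-opus-4-7",         # hardest steps, whole-codebase reasoning
}

def classify_tier(task: str) -> str:
    # Crude keyword heuristic to keep the sketch self-contained;
    # in production this decision is itself a good cheap Haiku call.
    lowered = task.lower()
    if any(k in lowered for k in ("classify", "route", "label")):
        return "dispatch"
    if any(k in lowered for k in ("refactor", "architecture", "codebase")):
        return "deep"
    return "workhorse"

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def run(task: str) -> str:
    response = client.messages.create(
        model=MODEL_BY_TIER[classify_tier(task)],
        max_tokens=1024,
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text
```

Tag every call with its tier in your tracing layer so cost per completed task can be broken out per tier, not just per token.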
Prompt caching is the throughput metric
Stable system prompts, retrieval prefixes, and tool registries are all cacheable. Cache hit rate of 60–80% is achievable on most production workflows and translates to 40–50% token-cost reduction. If you are not measuring cache hit rate per route, you are leaving cost on the table.
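A sketch of what that measurement looks like with the anthropic Python SDK. The cache_control block and the cache_* usage fields are part of Anthropic's prompt-caching API; the model ID and prompt text are placeholders.

```python
# Sketch: mark the stable prefix cacheable and compute hit rate from usage.
import anthropic

client = anthropic.Anthropic()

# Must stay byte-identical across calls, and must meet the model-specific
# minimum cacheable length (around 1024 tokens on recent models).
STABLE_SYSTEM_PROMPT = "You are a support-triage agent. <long stable instructions>"

def call_with_cache(user_msg: str):
    response = client.messages.create(
        model="claude-sonnet-4-6",  # placeholder model ID
        max_tokens=512,
        system=[{
            "type": "text",
            "text": STABLE_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},  # any byte change busts the cache
        }],
        messages=[{"role": "user", "content": user_msg}],
    )
    usage = response.usage
    read = getattr(usage, "cache_read_input_tokens", 0) or 0
    wrote = getattr(usage, "cache_creation_input_tokens", 0) or 0
    total = usage.input_tokens + read + wrote
    hit_rate = read / total if total else 0.0
    print(f"cache hit rate {hit_rate:.0%} (read={read}, wrote={wrote})")
    return response
```

Aggregate hit_rate per route and alert when a route drops: a silent prompt edit upstream is the usual cause of a busted cache.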
Tool-use telemetry
Anthropic now exposes per-call selection scores at the model boundary. You can see why the model picked tool A over B before the call lands in your code. This collapses postmortems — the score deltas tell you whether the registry, the description, or the prompt was the weak link.
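The selection-score surface is new and its exact response shape is not modeled below, since the field names would be guesses. What is stable in the Messages API is parsing tool_use blocks, which is enough to log which tool the model picked and with what input. The tool names and schemas here are invented for illustration.

```python
# Sketch: log the model's tool choice on every call by parsing tool_use
# blocks from the Messages API response.
import anthropic

client = anthropic.Anthropic()

TOOLS = [
    {
        "name": "search_orders",
        "description": "Look up a customer's orders. Use when the user "
                       "references an order ID or a shipment.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
    {
        "name": "refund_order",
        "description": "Issue a refund. Use only after search_orders has "
                       "confirmed the order exists.",
        "input_schema": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
]

def log_tool_selection(user_msg: str):
    response = client.messages.create(
        model="claude-sonnet-4-6",  # placeholder model ID
        max_tokens=512,
        tools=TOOLS,
        messages=[{"role": "user", "content": user_msg}],
    )
    for block in response.content:
        if block.type == "tool_use":
            print(f"selected tool={block.name} input={block.input}")
    return response
```

Pair this log with the expected tool from your eval set; the disagreements point straight at the registry entry to fix.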
Deep dives on Claude API
Tool descriptions are prompts. Fix the registry, not the agent.
When an agent picks the wrong tool, the registry is broken — not the agent. Three rules I now apply before debugging anything in a multi-tool system: precise names, "when to use" triggers, and a curated load list. Anthropic's new tool-selection telemetry finally puts numbers on what changes accuracy.
The cheapest LLM call is the one you do not make — GitHub's 19–62% token cut, decoded
GitHub published an instrumented analysis of their agentic CI workflows and reported 19–62% token-cost reductions. The savings are the headline. The technique — pre-agentic data fetching and tool-registry hygiene — is the story most teams will miss.
Claude Opus 4.7's 1M context: when to RAG and when to just stuff it
A reliably usable million-token context is real now, but it does not retire RAG; it changes the calculus. Cost, latency, recency, and the prompt-cache angle nobody is talking about.
Prompt caching is not optional anymore — measuring a 47% cost drop
A walkthrough from a client engagement: identifying stable prefixes, restructuring the system prompt for cacheability, and the telemetry that proved caching was actually working.
The agent observability stack we ship to every client
Traces, spans, evals, cost-per-completed-task, and the one dashboard panel that catches 80% of regressions. Vendor-agnostic — covers Langfuse, Honeycomb, and rolling your own.
Three patterns I broke in 2025 — and what I do instead now
Self-correction loops without budgets, single-agent solutions to multi-domain problems, and using JSON mode to force structure I should have built into the schema. An honest review.
Haiku 4.5 made our router 5x cheaper. The trade-off matters
Replacing Sonnet with Haiku in the dispatcher role cut our orchestration cost dramatically. It also cost us in two specific places I did not predict.
Eval datasets: stop testing your agents on the happy path
If your eval set is the demos you showed the client, you are testing the wrong thing. How we build evals from production failures and the minimum viable suite to ship.
I was wrong about JSON mode. Here is what changed my mind
For two years I told teams to avoid forced JSON outputs and use structured tool calls. That was right then and partially wrong now — schema enforcement got better, latency penalties got smaller.
Why your agent keeps failing after 3 steps
The exit condition problem nobody talks about. Most agents are built for the happy path — where every tool call succeeds and the task completes cleanly. Real production agents are different.
RAG vs CAG: how to actually decide
A decision framework from real implementations. RAG retrieves at query time; CAG (cache-augmented generation) preloads knowledge into the cached context. Knowing which to use, and when to combine both, determines whether your agent finds the right answer at the right cost.
Latest in Claude API
Claude Code adds parallel sub-agent execution — multi-file refactors land in a single turn
Claude Opus 4.7 ships with 1M-token context window in production
Claude Code adds project memory — persistent context that survives across CLI sessions
Anthropic publishes "Effective Tool Design" — official guidance for production agents
Sonnet 4.6 update: cheaper tokens, sharper tool calls, fewer retry loops
Haiku 4.5 in production — small-model speed, surprising tool-use chops
Anthropic research: when to use supervisor vs. swarm patterns in multi-agent systems
OpenAI Agent Builder GA — pricing finally competitive for enterprise tool use
Train your team on Claude API
Two tracks — one for developers who build agents, one for business teams who use them. Customised to your stack, hands-on from session 1.
See Claude API training tracks
Ship your first Claude API system
Architecture design, production implementation on Claude API and MCP, full observability, and a real handoff. Working agents, not slides.
Explore Claude API consulting
