Training

AI Systems Engineering — the training your team needs to actually ship. Eight-day intensives. 80% hands-on. Working code every session.

This is the corporate training program we run for development teams who already use AI tools and want to go further. By day eight, your team has shipped a working agentic system on your stack, with observability, evals, and the maintenance playbook to keep it running.

8 days
from "we use Copilot" to "we ship agents"
80%
hands-on every session — concept · demo · build
8
working deliverables your team owns and can deploy
100%
customised to your stack and domain
Use cases

Who this training is built for

IT services dev teams

Teams that already deliver code and want to add agentic-AI capability to their offerings.

Enterprise platform teams

In-house engineering teams asked to deliver the first wave of AI-powered products.

AI-curious senior devs

Experienced engineers who want a structured deep-dive, not another "intro to LLMs" course.

Industries served
IT Services · Enterprise Software · ERP / Implementation Partners · Product Engineering
Technology

Curriculum stack — what trainees ship code against

Reasoning models: Claude Opus / Sonnet / Haiku · model-selection drills
Tool layer: Custom MCP servers · Pydantic schemas · auth profiles
Orchestration: Supervisor + handoff patterns · sub-agent execution
Observability: Langfuse · cache-hit telemetry · cost attribution
Deployment: Python · FastAPI · Docker · the customer's actual stack
Methodology

Delivery methodology

01

Pre-flight

Curriculum customised to your stack and the kind of agents you intend to ship. Reading list shared 1 week ahead.

02

Sessions 1–3

Foundations — agentic vs scripted; first agent loop; tool design; cost-aware prompting; prompt caching.
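The "first agent loop" in this block is the cycle everything later builds on: call the model, dispatch any tool it requests, feed the result back, and repeat until the model returns a final answer. A minimal stdlib sketch of that shape — `fake_model`, the message format, and the `add` tool are illustrative stand-ins; the sessions themselves build against the Anthropic SDK:

```python
# Minimal agent loop: the model proposes a tool call, we execute it,
# append the result, and loop until the model returns a final answer.
# fake_model stands in for a real LLM API call.

TOOLS = {
    "add": lambda a, b: a + b,   # illustrative tool
}

def fake_model(messages):
    # Stub: first turn requests a tool, second turn answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "add", "args": {"a": 2, "b": 3}}
    result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"type": "final", "text": f"The sum is {result}"}

def agent_loop(user_msg, max_turns=5):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = fake_model(messages)
        if reply["type"] == "final":
            return reply["text"]
        out = TOOLS[reply["name"]](**reply["args"])   # dispatch the tool
        messages.append({"role": "tool", "content": out})
    raise RuntimeError("agent did not converge")

print(agent_loop("What is 2 + 3?"))   # The sum is 5
```

The `max_turns` cap is the difference between "agentic" and "runaway": every loop the course teaches is bounded.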

03

Sessions 4–6

MCP servers from scratch; multi-agent orchestration; eval-set design; observability wiring.
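"Eval-set design" here means pinning agent behaviour to a fixed, scored set of cases before anything ships. A stdlib-only sketch of the shape — the cases and the stub `agent` are illustrative, not course material:

```python
# Tiny eval harness: run the agent over fixed cases, score each,
# and report a pass rate you can gate CI on.
EVAL_SET = [
    {"input": "2+2", "expect": "4"},
    {"input": "capital of France", "expect": "Paris"},
]

def agent(prompt):
    # Stub agent; the real one calls a model and tools.
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "")

def run_evals(cases):
    scores = [agent(c["input"]) == c["expect"] for c in cases]
    return sum(scores) / len(scores)   # pass rate, 0.0 to 1.0

print(run_evals(EVAL_SET))  # 1.0
```

The point of the drill is the harness, not the cases: once the pass rate is a number, regressions become visible in CI instead of in production.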

04

Sessions 7–8

Production hardening — guardrails, retries, audit log, deployment. Working agent on your stack by the end.
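The retry work in sessions 7–8 follows a standard pattern: wrap flaky calls (model APIs, tools) with bounded retries and exponential backoff. A sketch of that pattern — `flaky_tool` and the delay values are illustrative:

```python
import time

# Bounded retries with exponential backoff: the kind of production
# hardening sessions 7-8 apply to model and tool calls.
def with_retries(fn, attempts=3, base_delay=0.01):
    def wrapped(*args, **kwargs):
        for i in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if i == attempts - 1:
                    raise                      # out of retries: surface it
                time.sleep(base_delay * 2 ** i)  # 0.01s, 0.02s, ...
    return wrapped

calls = {"n": 0}
def flaky_tool():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(with_retries(flaky_tool)())  # ok (succeeds on the 3rd attempt)
```

Re-raising on the final attempt matters: a retry wrapper that swallows the last error turns an outage into a silent wrong answer.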

Security & scalability

Why this program ships engineers who can ship

No beginner content

Trainees are assumed to be productive engineers. We do not teach Python or IDE setup.

On your stack, your data

Customised to your codebase, your tools, your domain — not a generic course built for the average enterprise.

Working code every day

Every session ends with code that runs. By session 8, the agent ships to your staging environment.

Integrations

Tooling we drill on during training

  • Claude API + Anthropic SDK
  • MCP server SDKs (Python + TypeScript)
  • Langfuse traces + cache-hit metrics
  • Pydantic v2 validators
  • GitHub Actions for CI agent workflows
Business impact

Why training pays back the team faster than hiring

Hiring an AI specialist is slow, expensive, and creates a bus factor of one. Training your existing team distributes the capability — and they already know your codebase.

8 days
to working capability across the team
~5×
cheaper than hiring a specialist (per engineer trained)
0
lock-in — the skills transfer to your future projects
Case studies

How recent engagements actually shipped

IT Services · 6 weeks discovery → handoff

PR review pipeline cuts senior-engineer time 4×

Mid-market IT services firm · Ahmedabad · 180 engineers

Problem

Senior engineers were spending 8–12 hours per week each on first-pass PR review across a 6-team monorepo. Junior PRs waited 2+ days for sign-off; velocity stalled; the highest-judgement people were doing the lowest-judgement work.

Solution

A multi-agent CI workflow triggered on every PR open. Three specialist agents run in parallel — a reviewer (Claude Sonnet 4.6) for code-correctness and convention, a security agent for risk patterns, and a test-generator agent for coverage gaps. Outputs are consolidated into a single PR comment within 90 seconds. Humans review the agents' synthesis, not the raw diff.

Claude Sonnet 4.6 (reviewer) · Custom MCP server: GitHub API · GitHub Actions · Langfuse traces
~36 hrs/wk
senior engineer time reclaimed across the team
< 3 days
payback period at loaded-cost rate
review throughput per senior engineer
0
production regressions traced to AI-passed reviews in 90 days
Read the full case study
Workshop / Public Build · 1 day · 8 hours hands-on

The Agentic Operating System — workshop build

AIMED · public workshop · ~40 engineers

Problem

Most teams meeting agentic AI for the first time get stuck on one of three blockers: tool design, orchestration choice, or the gap between a working demo and a system that survives Monday morning. The AIMED workshop format compresses the answers into one day of hands-on building.

Solution

A day-long live build of "the Agentic Operating System" — a multi-agent shell with a supervisor (planning, decomposition), handoff agents (parallel reads, sequenced writes), shared tool registry via MCP, and observability wired in from line one. Every attendee leaves with a running shell on their own laptop, the source, and the patterns to extend it.
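The supervisor + handoff shape described above can be sketched in a few lines: the supervisor decomposes the task into a plan, read steps fan out, write steps run in sequence, and each sub-agent extends a shared handoff context. The plan contents and worker stubs below are illustrative, not the workshop source:

```python
# Supervisor + handoff sketch: the supervisor plans, workers execute,
# and handoff context accumulates between sub-agents.
def plan(task):
    # Supervisor decomposition (illustrative plan, not real output).
    return [("read", "load spec"), ("read", "load code"), ("write", "patch")]

WORKERS = {
    "read":  lambda step, ctx: ctx + [f"read:{step}"],
    "write": lambda step, ctx: ctx + [f"write:{step}"],
}

def supervise(task):
    ctx = []   # handoff context passed between sub-agents
    for kind, step in plan(task):
        ctx = WORKERS[kind](step, ctx)   # handoff: worker extends context
    return ctx

print(supervise("fix the bug"))
# ['read:load spec', 'read:load code', 'write:patch']
```

In the real build each worker is a sub-agent behind the shared MCP tool registry; the sequencing rule (parallel reads, ordered writes) is what keeps concurrent agents from clobbering each other.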

Claude Sonnet 4.6 + Haiku 4.5 (free Claude Code tier worked) · Three MCP servers built from scratch: files · Python supervisor + handoff context passing · Langfuse traces from the first agent call
40
engineers shipped a running multi-agent shell on their own laptops
3
MCP servers per attendee, written from scratch
8 hrs
concept to working artefact
Read the full case study
Frequently asked

AI Systems Engineering Training — questions buyers ask

Brief us on your team

A 30-minute call covers your stack, the kind of agents you want to ship, and how the 8-day intensive maps to your goals. We propose a custom curriculum the same week.