Enterprise Engagement · ERP / Enterprise Software · Feb 2026 · 8 weeks discovery → handoff

ERP support triage agent eliminates the Level-1 backlog

Supervisor-pattern agent integrating Odoo with customer-facing email + chat.

Odoo-based ERP partner · Gujarat · ~60 implementation consultants

Enterprise AI Automation · Odoo + AI Integrations · Agentic AI Implementations
  • 340 → 18: open L1 backlog within 6 weeks of go-live
  • ~60%: L1 staffing reduction on agent-eligible categories
  • $2.30: average cost per agent-resolved ticket
  • 8 wks: engagement, discovery to handoff
Business problem

What the team was actually solving

Customer support backlog had grown to ~340 open tickets. Level-1 triage took 12–20 minutes per ticket on average, and 35% of tickets were misrouted on first pass; every misroute triggered a customer-facing escalation cycle.

Existing workflow

Where the old process broke

  • Manual ticket classification: 12–20 minutes per ticket across an overwhelmed L1 team
  • 35% first-pass misrouting causing avoidable escalation cycles
  • Inconsistent first responses depending on which consultant picked up the ticket
  • No structured handoff brief: escalations reached senior consultants without context
Proposed solution

The AI / technical solution we shipped

A supervisor-pattern agent that ingests email and form submissions, classifies the issue, queries the customer's Odoo instance for context (open invoices, recent modules, last login, current contracts), drafts a Level-1 response with the right module screenshots inline, and routes complex tickets to the right consultant with a pre-filled handoff brief.
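The supervisor loop described above can be sketched as follows. This is a minimal, illustrative version: the function names, the keyword classifier standing in for the Haiku call, and the 0.8 escalation threshold are assumptions, not the production implementation.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    ticket_id: str
    customer_id: str
    body: str

# Illustrative keyword mapping; in production this step is an LLM classification call.
CATEGORIES = {"invoice": "billing", "login": "access", "module": "configuration"}

def classify(body: str) -> str:
    for keyword, category in CATEGORIES.items():
        if keyword in body.lower():
            return category
    return "general"

def fetch_odoo_context(customer_id: str) -> dict:
    # Stand-in for the read-only Odoo MCP tools (open invoices, modules, contracts).
    return {"customer_id": customer_id, "open_invoices": [], "modules": ["crm"]}

def handle_ticket(ticket: Ticket, confidence: float) -> dict:
    """Supervisor step: classify -> contextualise -> draft/route."""
    category = classify(ticket.body)
    context = fetch_odoo_context(ticket.customer_id)
    if confidence < 0.8:  # escalation threshold (assumed value)
        # Complex tickets are routed with a pre-filled handoff brief.
        return {"action": "escalate", "category": category,
                "brief": {"ticket": ticket.ticket_id, "context": context}}
    return {"action": "reply", "category": category}
```

The key design point is that the supervisor owns the control flow explicitly, rather than letting worker agents hand off to each other.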

System architecture

How the system is wired

Triage flow
Inbound (email · form · chat) → Classifier (Haiku · category) → ERP context (Odoo MCP) → Reasoner (Sonnet · draft) → Action (reply · route · escalate)
Technology

Technology stack

  • Reasoning models: Claude Sonnet 4.6 (drafting) · Haiku 4.5 (classification + routing)
  • Tool layer: custom MCP server for Odoo (read-only customer / order / invoice scope)
  • Orchestration: supervisor pattern · classify → contextualise → draft → route
  • Validation: Pydantic schemas · brand-voice regex · escalation-threshold logic
  • Audit + observability: Langfuse traces · per-ticket cost · operator override log
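The validation layer can be sketched with a Pydantic schema. The field names, the 20-character minimum, and the brand-voice pattern below are illustrative assumptions; the real rule set is larger.

```python
import re
from pydantic import BaseModel, Field, field_validator

# Assumed brand-voice rules: words the drafts must never contain.
BANNED = re.compile(r"\b(guarantee|legal advice)\b", re.IGNORECASE)

class DraftReply(BaseModel):
    """Schema every agent draft must satisfy before it reaches an operator."""
    ticket_id: str
    category: str
    body: str = Field(min_length=20)          # no one-line brush-offs
    confidence: float = Field(ge=0.0, le=1.0) # drives the escalation threshold

    @field_validator("body")
    @classmethod
    def brand_voice(cls, v: str) -> str:
        if BANNED.search(v):
            raise ValueError("draft violates brand-voice rules")
        return v
```

A draft that fails validation never reaches the helpdesk; it is regenerated or escalated instead.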
Integration

Integration approach

The agent runs as a Python service alongside the existing helpdesk. Inbound channels (Zendesk, email, form submissions) all funnel through one normalising webhook that calls the agent. Outbound traffic (drafts and ticket updates) goes back through the helpdesk's API, so the existing audit trail captures everything.

  • Inbound normaliser: Zendesk + email + form submissions to a single event shape
  • Odoo MCP server: read-only customer / order / invoice / contract tools
  • Helpdesk API: drafts written as agent-attributed comments; humans approve before send
  • Operator dashboard: override + audit log in the team's existing helpdesk UI
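The inbound normaliser above can be sketched as a single mapping function. The payload shapes and field names here are assumptions for illustration; the real webhook handles more channel-specific quirks.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class InboundEvent:
    # The single event shape all channels normalise into (field names assumed).
    channel: str
    external_id: str
    customer_email: str
    subject: str
    body: str

def normalise(payload: dict[str, Any]) -> InboundEvent:
    """Map Zendesk / email / form payloads onto one event shape."""
    if "ticket" in payload:  # Zendesk-style webhook (assumed payload shape)
        t = payload["ticket"]
        return InboundEvent("zendesk", str(t["id"]), t["requester_email"],
                            t["subject"], t["description"])
    if "from" in payload:    # raw inbound email
        return InboundEvent("email", payload["message_id"], payload["from"],
                            payload["subject"], payload["text"])
    # Otherwise assume a form submission
    return InboundEvent("form", payload["submission_id"], payload["email"],
                        payload.get("topic", ""), payload["message"])
```

Because the agent only ever sees `InboundEvent`, adding a new channel means writing one new branch here, not touching the agent.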
Security & scalability

Read-only ERP scope

The Odoo MCP server cannot create, update, or delete records. Every mutating action goes through a human-approved helpdesk reply path.

Customer data minimisation

Customer PII only reaches the model where it is required for the response (e.g., addressing the customer by name). Account numbers and invoice details are referenced by ID, not echoed in prompts.
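The minimisation rule can be sketched as a context builder that passes the model only what the reply needs. Field names are illustrative.

```python
def context_for_prompt(customer: dict, invoices: list[dict]) -> dict:
    """Build the model-visible context: a first name for addressing the
    customer, and record IDs only; amounts and addresses stay out of prompts."""
    return {
        "customer_first_name": customer["name"].split()[0],
        "open_invoice_ids": [inv["id"] for inv in invoices],  # IDs, not details
        "invoice_count": len(invoices),
    }
```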

Operator override

Every agent draft is reviewable. Operators can override, edit, or block any reply. Overrides feed back into the eval set as a regression case.

Tenant-scoped tokens

For the partner's multi-tenant ERP customer base, every Odoo call is scoped to the specific customer's tenant.

Methodology

Delivery process

01

Discovery (2 wks)

120 historic tickets sampled across categories and customer tiers. Misrouting patterns identified; eval-set seed built.

02

Architecture (1 wk)

Supervisor pattern over swarm — explicit escalation rules, with a small operator dashboard for override.

03

Odoo MCP server (2 wks)

Read-only customer / order / invoice / contract tools. Scoped per-tenant. Eval cases against synthetic instances.

04

Agent + dashboard (2 wks)

Supervisor agent + classifier + drafting logic. Override + audit dashboard in helpdesk UI.

05

Parallel-run (1 wk)

Agent + human run side-by-side on live tickets. Outputs compared, eval set extended, accuracy thresholds tuned.

Deployment

Python service in the partner's existing container platform. Stateless behind a webhook entrypoint; PostgreSQL stores the agent's decision log; Redis caches tenant-scoped tokens. The eval suite runs in CI and can promote or block model upgrades.

  • Containerised Python service (existing infra)
  • PostgreSQL: decision log + eval results
  • Redis: tenant-scoped Odoo tokens (TTL 1 hr)
  • CI: eval suite gate on prompt/model changes
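The CI gate on prompt/model changes can be sketched as a simple threshold check over eval results. The 0.92 threshold and result shape are illustrative assumptions.

```python
def eval_gate(results: list[dict], min_accuracy: float = 0.92) -> bool:
    """Return True if a candidate prompt/model may be promoted.
    Each result: {'case_id': str, 'correct': bool} (shape assumed)."""
    if not results:
        return False  # an empty eval run never promotes
    accuracy = sum(r["correct"] for r in results) / len(results)
    return accuracy >= min_accuracy
```

Wiring this as a required CI step means a regressed prompt change fails the build instead of reaching live tickets.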
Observability

Every ticket touched by the agent has a Langfuse trace with the ticket ID as session key. Operators can click "show trace" from the dashboard to see the agent's reasoning, the tools called, and the cost.

  • Per-ticket Langfuse trace, session-keyed to the ticket ID
  • Cost per ticket reported in the operator dashboard
  • Override rate tracked per category as the leading drift indicator
  • Weekly eval-suite refresh from overridden tickets
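The per-category override rate, the leading drift indicator above, is a straightforward aggregation. The event shape is an assumption for illustration.

```python
from collections import defaultdict

def override_rates(events: list[dict]) -> dict[str, float]:
    """Override rate per ticket category.
    Each event: {'category': str, 'overridden': bool} (shape assumed)."""
    totals: dict[str, int] = defaultdict(int)
    overrides: dict[str, int] = defaultdict(int)
    for e in events:
        totals[e["category"]] += 1
        overrides[e["category"]] += int(e["overridden"])
    return {c: overrides[c] / totals[c] for c in totals}
```

A rising rate in one category flags drift there before aggregate accuracy moves.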
Before vs after

Before
  • L1 triage time per ticket: 12–20 min
  • First-pass misrouting: 35%
  • Open L1 backlog: 340 tickets
  • Cost per L1 ticket: labour-dominated
After
  • L1 triage time per ticket: < 4 min (agent)
  • First-pass misrouting: 11%
  • Open L1 backlog: 18 tickets (6 wks)
  • Cost per L1 ticket: $2.30 (agent-handled)
Automation impact

72% of Level-1 tickets are now resolved without human touch within four minutes of arrival. The escalations that do reach senior consultants arrive with a pre-filled handoff brief (module, customer context, attempted L1 response), so they start roughly 80% of the way through what was previously the first ten minutes of every escalation.

  • 72% of L1 tickets resolved without human touch
  • 4 min average time-to-first-response (agent)
  • 11% first-pass misrouting (down from 35%)
Business outcomes

The partner reduced L1 staffing requirements by ~60% on the team handling agent-eligible categories and redirected those consultants to implementation work (their highest-margin offering).

  • 340 → 18: open L1 backlog within 6 weeks of go-live
  • ~60%: L1 staffing reduction on agent-eligible categories
  • $2.30: average cost per agent-resolved ticket
  • 8 wks: engagement, discovery to handoff
Lessons learned

What we'd tell another team building this

  • Customer-tier classification matters more than ticket-type classification. The same "invoice question" is a different ticket from a high-tier customer than from a low-tier one; v1 missed that nuance and v2 added it.
  • The handoff brief is the feature senior consultants love most. They will defend the agent because the briefs save them ten minutes per escalation.
  • Eval cases sampled from real overridden tickets beat synthetic adversarial cases for tuning. Drift shows up in real data first.
What's next

Future scalability

The same Odoo MCP server is now feeding renewals workflows, onboarding agents, and proactive customer health scoring. Once the canonical MCP server existed, each new agent dropped from 8 weeks to ~2 weeks of engineering.

  • Renewals follow-up agent reusing the same Odoo MCP
  • Customer-onboarding agent: contract → setup checklist → kickoff scheduling
  • Customer-health scoring agent: weekly synthesis from Odoo signals
  • Cross-agent eval registry shared with the partner's in-house team

Considering an ERP-integrated agent?

A scoping call covers the ticket / workflow you want to automate first, the ERP scopes you are comfortable exposing, and the expected throughput / cost envelope.