# Asurion Innovation with AI
How AI is embedded in Asurion-specific workflows -- not as a generic chatbot, but as plumbing that understands claims, devices, NPS impact, and cost avoidance. Every claim below is verifiable against the repo.
## The Asurion problem
Asurion processes 10M+ annual claims across 5+ product lines with a distributed repair network (uBreakiFix, authorized service centers, mail-in). Today's decisions are siloed:
- An agent on the phone cannot see device health signals (battery degradation, damage detection) alongside repair availability and inventory.
- The default action is to ship a replacement ($250) even when a same-day repair ($40) is available at a nearby store with parts in stock.
- NPS-impacting SLA misses occur because capacity constraints (repair queue depth, parts availability) are not visible at decision time.
- Fraud signals from one channel never reach decision-makers in another.
Result: Every unnecessary replacement costs ~$210 in margin. Multiply by millions of claims and the business case writes itself.
## How AI is embedded (not bolted on)
### 1. Catalog-anchored SQL generation
The SQL generator does not do free-text NL-to-SQL against an unknown schema. It is anchored to:
- `metrics_catalog` (Postgres table, `backend/app/metrics/seed.py`) — 10 seeded metrics with `name`, `definition`, `formula`, `source_table`, `default_filter`. Every SQL query starts from a registered metric row.
- `ai_query_guidelines.md` (`data-dictionary/ai_query_guidelines.md`) — human-curated Asurion-specific SQL rules: prefer `claim_id` for volume counts, use the join map for cross-table queries, respect data-freshness windows.
- Data dictionary (`data-dictionary/` — 4 CSVs covering 52 tables, 1663 columns, 138 joins, 42 KPIs from Asurion's EDW schema) — stuffed into the Bedrock prompt context (ADR-PROTO-004).
- sqlglot safety layer (`backend/app/sql_gen/safety.py`) — structural validation: SELECT-only, table allowlist, LIMIT injection, forbidden-construct blocking. 18 tests, 25 adversarial SQL cases. The safety layer is rules-based, not prompt-based — it catches what the model misses.
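The shape of those structural checks can be sketched in a few lines. This is an illustrative stand-in, not the repo's implementation: the real `safety.py` walks a sqlglot AST, while this sketch uses naive keyword and regex checks to show the same rules-based gate (SELECT-only, table allowlist, LIMIT injection, forbidden constructs). The table names here are hypothetical.

```python
import re

ALLOWED_TABLES = {"claims", "devices", "stores"}  # hypothetical allowlist
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|grant|truncate)\b|;", re.I)

def validate_sql(sql: str, default_limit: int = 100) -> str:
    """Rules-based gate: reject anything that is not a plain SELECT over
    allowlisted tables, and force a LIMIT if one is missing."""
    stripped = sql.strip()
    if not stripped.lower().startswith("select"):
        raise ValueError("SELECT-only: statement rejected")
    if FORBIDDEN.search(stripped):
        raise ValueError("forbidden construct detected")
    # Naive table extraction; the real layer traverses a parsed AST instead.
    tables = set(t.lower() for t in re.findall(r"\b(?:from|join)\s+([A-Za-z_]+)", stripped, re.I))
    unknown = tables - ALLOWED_TABLES
    if unknown:
        raise ValueError(f"tables not in allowlist: {sorted(unknown)}")
    if not re.search(r"\blimit\s+\d+\b", stripped, re.I):
        stripped += f" LIMIT {default_limit}"
    return stripped

print(validate_sql("SELECT claim_id FROM claims WHERE status = 'open'"))
```

The point of the design is that this gate runs after every model response, so a hallucinated `DROP` or an unregistered table fails loudly regardless of what the prompt said.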
Why this is Asurion-specific: Generic NL-to-SQL hallucinates joins and invents column names. This system can only query tables and columns that exist in Asurion's registered dictionary, against metrics that Asurion's data team has defined. The model fills in dimensions and filters on a templated query shape; it does not invent the query shape.
ADR: ADR-PROTO-002 (anchored SQL gen).
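What "anchored" means in practice can be shown with a minimal sketch. The catalog row below mirrors the seeded `metrics_catalog` columns, but its values and the `build_query` helper are invented for illustration; the model's only degrees of freedom are dimensions and filters, never the query shape.

```python
# Hypothetical catalog row mirroring the metrics_catalog column shape.
METRICS = {
    "cost_avoided_mtd": {
        "formula": "SUM(cost_avoided)",
        "source_table": "claims",
        "default_filter": "claim_date >= date_trunc('month', current_date)",
    },
}

def build_query(metric_name, dimension=None):
    """The query shape comes from the registered row; the model may only
    supply a dimension (and, in the real system, extra filters)."""
    m = METRICS[metric_name]  # unregistered metrics fail loudly here
    select = m["formula"] + " AS value"
    group = ""
    if dimension:
        select = f"{dimension}, " + select
        group = f" GROUP BY {dimension}"
    return (f"SELECT {select} FROM {m['source_table']} "
            f"WHERE {m['default_filter']}{group}")

print(build_query("cost_avoided_mtd", dimension="product_line"))
```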
### 2. LangGraph Clarifier with Asurion metric catalog
The Widget Builder Clarifier is a LangGraph state machine (`backend/app/widgets/graph.py`) with 8 nodes. It is not a generic chatbot — it is domain-aware:
- Metric matching (`backend/app/widgets/nodes/metric_matcher.py`): The `metricMatcher` node resolves the operator's natural-language description against `metrics_catalog` before asking any questions. If the match confidence is high (e.g., "cost avoided" matches `cost_avoided_mtd`), the metric gap is auto-closed — no question needed. This only works because the catalog contains Asurion-specific definitions.
- Gap detection (`backend/app/widgets/nodes/gap_detector.py`): Checks for missing fields specific to Asurion widgets — metric definition, dimensions, filters, chart type, refresh interval. Not generic "tell me more" questions.
- Catalog-aware questions (`backend/app/widgets/nodes/question_prioritizer.py`): When questions are needed, they surface Asurion's actual metrics with their definitions as single-select options. An operator sees "Active issues — Open and in-progress customer issue sessions across all regions", not "What metric do you want?"
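The auto-close control flow can be sketched without any LLM at all. This is a simplified stand-in: the real node matches against the live catalog (and may use the model), whereas this sketch uses `difflib` string similarity with an invented threshold, purely to show the "high confidence closes the gap, low confidence becomes a question" branch.

```python
from difflib import SequenceMatcher

CATALOG = {  # hypothetical slice of metrics_catalog
    "cost_avoided_mtd": "Cost avoided month to date across all claims",
    "active_issues": "Open and in-progress customer issue sessions across all regions",
}

def match_metric(description, threshold=0.6):
    """Return (metric_name, auto_closed). A high-confidence match closes
    the metric gap without asking the operator anything."""
    scored = [
        (max(SequenceMatcher(None, description.lower(), cand).ratio()
             for cand in (name.replace("_", " "), definition.lower())), name)
        for name, definition in CATALOG.items()
    ]
    score, best = max(scored)
    return (best, True) if score >= threshold else (best, False)

print(match_metric("cost avoided"))  # high confidence: gap auto-closed
```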
Why this is Asurion-specific: A generic chatbot would ask "What do you want to see?" The Clarifier asks "Which of these 10 registered Asurion metrics?" and auto-resolves when possible.
ADR: ADR-005 (Widget Builder via LangGraph Clarifier), ADR-007 (first-class metric catalog).
### 3. Two-tier Bedrock model policy
Not a generic "call Claude" integration. Each Bedrock call site uses a model tier tuned to the workload (ADR-011):
| Tier | Model | Env Var | Why This Tier |
|---|---|---|---|
| Fast | Sonnet 4.6 | `BEDROCK_MODEL_ID` | Intent extraction + SQL gen: flat tool-input schema, downstream safety catches errors, latency budget <3s |
| Reasoning | Opus 4.7 | `BEDROCK_REASONING_MODEL_ID` | Spec synthesis + custom codegen: complex nested output, creative composition, quality matters more than speed |
Why this is Asurion-specific: The tier assignment was determined by probing real Asurion workloads (see docs/lessons-learned.md — "Reasoning-tier latency busts the SQL-gen budget"). SQL gen needs the fast tier because the sqlglot safety layer is the actual correctness guarantee. Spec synthesis needs the reasoning tier because widget specs for Asurion's ops views require creative composition of metric definitions, chart types, and data intents.
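A per-call-site tier policy can be sketched as a thin resolver over the two env vars from the table above. The call-site names and default model IDs here are placeholders, not the repo's actual values:

```python
import os

# Placeholder defaults; real model IDs come from the deployment env.
TIER_ENV = {
    "fast": ("BEDROCK_MODEL_ID", "sonnet-model-id-placeholder"),
    "reasoning": ("BEDROCK_REASONING_MODEL_ID", "opus-model-id-placeholder"),
}

# Each Bedrock call site declares its tier once, per ADR-011.
CALL_SITE_TIER = {
    "intent_extraction": "fast",    # flat schema, <3s latency budget
    "sql_generation": "fast",       # sqlglot layer is the correctness gate
    "spec_synthesis": "reasoning",  # nested output, quality over speed
    "custom_codegen": "reasoning",
}

def model_for(call_site):
    """Resolve a call site to a model ID via its declared tier."""
    env_var, default = TIER_ENV[CALL_SITE_TIER[call_site]]
    return os.environ.get(env_var, default)

print(model_for("sql_generation"))
```

Keeping the mapping in one table means a tier reassignment (e.g., after a latency regression) is a one-line change rather than a hunt through call sites.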
### 4. Decision support: AI narrates, never decides
The decisioning pipeline (`backend/app/decisions.py`) uses deterministic rules + weighted scoring to rank actions (repair vs replace vs self-service). The AI's role is narrowly scoped:
- Rules decide: Coverage eligibility, battery health, nearest store slot availability, parts inventory — each produces a reason code.
- AI explains: Bedrock takes the reason codes and generates a natural-language rationale (`backend/app/rationale.py:10` — `templated_rationale(action_label, reason_codes)`).
- Fallback guaranteed: If Bedrock is unavailable, the template fallback produces a human-readable rationale from reason codes alone (ADR-003). The demo never breaks.
Why this is Asurion-specific: The reason codes are Asurion domain terms: `coverage_valid`, `battery_health_drop_detected`, `nearest_store_slot_available`, `inventory_tight`. The LLM's job is to weave these into a sentence an ops executive can read in 2 seconds — not to invent reasons or make decisions.
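The guaranteed fallback is essentially a lookup from reason codes to human phrases. A minimal sketch, with invented phrase wording (the real strings live in `backend/app/rationale.py`):

```python
# Hypothetical phrase table; real wording lives in backend/app/rationale.py.
PHRASES = {
    "coverage_valid": "the claim is covered",
    "battery_health_drop_detected": "a battery health drop was detected",
    "nearest_store_slot_available": "a same-day slot is open at the nearest store",
    "inventory_tight": "replacement inventory is tight",
}

def templated_rationale(action_label, reason_codes):
    """Deterministic fallback: weave reason codes into one readable
    sentence. Used verbatim when Bedrock is unavailable."""
    reasons = [PHRASES.get(code, code.replace("_", " ")) for code in reason_codes]
    return f"Recommended: {action_label}, because " + "; ".join(reasons) + "."

print(templated_rationale(
    "same-day repair",
    ["coverage_valid", "nearest_store_slot_available", "inventory_tight"],
))
```

Because the fallback consumes exactly the same reason codes the rules emit, the Bedrock path and the template path can never disagree on the decision, only on the fluency of the sentence.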
### 5. Custom widget codegen for Asurion ops views
When operators need views that don't fit the standard KPI/chart/table templates (e.g., an alerts feed with severity coloring, a device timeline with signal annotations), the Clarifier's custom path generates TSX code:
- Two-stage synthesis (ADR-007): Bedrock emits a flat `ComponentSpec` (TSX source + props); Python deterministically wraps it with the resolved metric block, data intent, and mock data. No user-authored type signatures — examples replace type definitions everywhere.
- Sealed scope (`frontend/src/widgets/CustomWidgetRenderer.tsx`): `@babel/standalone` compiles the TSX at render time. Only `React` and `Icon` globals are injected — no imports, no network access, no DOM manipulation.
- Asurion-specific mock data: The mock data is derived from the operator's examples (e.g., "show claim volume by product") and shaped to match Asurion's data model (claim IDs, product names, store locations).
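The deterministic second stage is plain data plumbing, not another LLM call. A sketch under assumed field names (this is not the repo's exact `WidgetSpec` schema; the keys and the `wrap_component` helper are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ComponentSpec:
    """What Bedrock emits in stage 1: flat, no nesting."""
    tsx_source: str
    props: dict

def wrap_component(spec, metric, mock_rows):
    """Stage 2: Python, not the model, attaches the resolved metric block,
    the data intent, and example-derived mock data."""
    return {
        "variant": "custom",
        "component": {"source": spec.tsx_source, "props": spec.props},
        "metric": metric,                       # resolved from metrics_catalog
        "data_intent": {"metric": metric["name"], "shape": "rows"},
        "mock_data": mock_rows,                 # shaped from operator examples
    }

spec = ComponentSpec(tsx_source="({rows}) => <AlertsFeed rows={rows} />",
                     props={"severityField": "severity"})
widget = wrap_component(
    spec,
    metric={"name": "open_alerts", "definition": "Open alerts by severity"},
    mock_rows=[{"claim_id": "CLM-1001", "severity": "high"}],
)
print(widget["variant"], widget["data_intent"]["metric"])
```

Splitting the stages this way keeps the model's output surface small and flat, which is exactly why the fast/reasoning tier split in section 3 holds up.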
ADR: ADR-006 (WidgetSpec custom variant).
## What this is NOT
| Generic pattern | What we do instead |
|---|---|
| ChatGPT wrapper that answers questions | LangGraph state machine that builds structured widget specs |
| NL-to-SQL against an unknown schema | Catalog-anchored SQL gen with dictionary context + sqlglot safety |
| AI making business decisions | Rules decide; AI narrates the reason codes into natural language |
| "Call Claude and hope" | Two-tier model policy, per-variant flat schemas, fail-loud error contracts |
| Prototype with synthetic data only | 3 Databricks metrics querying real Asurion claim data |
## The scale path: one pattern, many Asurion domains
The Event -> Context -> AI -> Action -> Feedback architecture is domain-agnostic:
| Domain | Events | AI Role | Actions |
|---|---|---|---|
| Claims (built) | `device.damage_detected`, `claim.submitted` | Explain repair-vs-replace decision | Approve repair, ship replacement |
| Fraud (designed) | Integrity signals, anomaly triggers | Explain risk escalation | Hold, escalate, monitor |
| Inventory (designed) | Warehouse velocity, stock alerts | Explain rebalancing logic | Rebalance, substitute, expedite |
| Retention (designed) | Engagement drops, churn signals | Explain outreach recommendation | Outreach, upgrade offer, discount |
| Product quality (designed) | Regional defect spikes, claim patterns | Explain recall triggers | Engineering alert, KB article, recall |
The Widget Builder Clarifier + metric catalog pattern means each new domain only needs: (1) new event types, (2) new metrics in `metrics_catalog`, (3) new reason codes in the decision engine. The dashboard, data binding, and AI explanation layer are reused as-is.
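Those three extension points can be captured as one registration structure per domain. This is a conceptual sketch, not the repo's actual registration API; the fraud event names, metrics, and reason codes are invented to show the shape of the change:

```python
# Hypothetical per-domain registration mirroring the three extension points.
DOMAINS = {
    "claims": {
        "events": ["device.damage_detected", "claim.submitted"],
        "metrics": ["cost_avoided_mtd"],
        "reason_codes": ["coverage_valid", "nearest_store_slot_available"],
    },
}

def register_domain(name, events, metrics, reason_codes):
    """Adding a domain touches only events, catalog metrics, and reason
    codes; the dashboard and explanation layer are reused unchanged."""
    DOMAINS[name] = {
        "events": events,
        "metrics": metrics,
        "reason_codes": reason_codes,
    }

register_domain(
    "fraud",
    events=["integrity.signal_raised", "anomaly.triggered"],
    metrics=["fraud_holds_mtd"],
    reason_codes=["velocity_anomaly", "device_mismatch"],
)
print(sorted(DOMAINS))
```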