What's Real vs What's Deferred

The pitch in one line

11 production-grade components implemented and tested. 6 items deferred with documented production paths.


What's REAL

Implemented, tested, and evidenced -- every row has a file you can open

```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#10b981', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#059669', 'lineColor': '#10b981', 'fontFamily': 'Inter, system-ui, sans-serif'}}}%%
graph LR
    classDef real fill:#10b981,stroke:#059669,color:#fff,stroke-width:2px,font-weight:bold
    classDef count fill:#1e3a5f,stroke:#0d253f,color:#fff,stroke-width:2px,font-weight:bold

    AI["fa:fa-brain Bedrock AI<br/>4 call sites, 2 tiers"]:::real
    DB["fa:fa-warehouse Databricks<br/>3 live metrics"]:::real
    SQL["fa:fa-shield-check SQL Safety<br/>18 tests, 25 cases"]:::real
    LG["fa:fa-sitemap LangGraph<br/>8-node Clarifier"]:::real
    CW["fa:fa-code Custom Widgets<br/>TSX codegen"]:::real
    MC["fa:fa-list Metric Catalog<br/>10 seeded metrics"]:::real

    TESTS["fa:fa-check-double 176 tests<br/>106 backend + 70 frontend"]:::count
    ADRS["fa:fa-file-text 18 ADRs<br/>every deviation documented"]:::count

    AI --- DB --- SQL
    LG --- CW --- MC
    TESTS --- ADRS
```
| Component | Evidence |
|---|---|
| Bedrock AI (4 call sites, 2 tiers) | Live LLM calls for intent extraction, spec synthesis, critic, and SQL generation |
| Databricks metrics (3 live) | Real Asurion claim data: `claim_volume`, `claims_by_product`, `claim_status_mix` |
| SQL safety layer | 18 tests, 25 adversarial SQL cases; sqlglot with the Databricks dialect |
| Metric catalog | 10 seeded metrics; atomic promotion at widget persist |
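The core idea of the safety layer is a read-only gate in front of the warehouse. The prototype does this properly with sqlglot AST parsing in the Databricks dialect; the stdlib-only sketch below (the function name `is_safe_select` is hypothetical) shows only the naive keyword version of the same guard.

```python
# Naive read-only SQL gate -- a sketch of the idea only.
# The real layer parses to an AST with sqlglot (Databricks dialect);
# a keyword scan like this would reject some legitimate SELECTs.
import re

BLOCKED = re.compile(
    r"\b(insert|update|delete|merge|drop|alter|create|grant|truncate)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Reject anything that is not a single read-only SELECT statement."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # a second statement smuggled in
        return False
    if not stripped.lower().startswith("select"):
        return False
    return BLOCKED.search(stripped) is None

# Adversarial cases in the spirit of the 25-case suite
assert is_safe_select("SELECT claim_id FROM claims WHERE status = 'open'")
assert not is_safe_select("SELECT 1; DROP TABLE claims")
assert not is_safe_select("DELETE FROM claims")
```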
| Component | Evidence |
|---|---|
| LangGraph Clarifier | 8-node graph, HITL interrupts, SSE streaming, metric auto-match |
| Custom widget codegen | @babel/standalone TSX compile in a sealed scope (React + Icon only) |
| Eval harness | 7 static checks plus `tsc --noEmit`; full Bedrock request/response capture |
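To make the "sealed scope" claim concrete, one of the static checks can be pictured as a banned-token scan over the generated TSX before it ever reaches the compiler. This is an illustration only: `check_widget_source` and the token list are hypothetical names, and the real harness runs seven such checks plus `tsc --noEmit`.

```python
# Illustrative static check: generated widget code may only touch the
# sealed scope (React + Icon), so any escape hatch is a violation.
BANNED_TOKENS = ("window.", "document.", "fetch(", "eval(", "XMLHttpRequest", "import ")

def check_widget_source(tsx: str) -> list[str]:
    """Return the banned tokens found in a generated widget source string."""
    return [tok for tok in BANNED_TOKENS if tok in tsx]

good = 'const Widget = () => <Icon name="chart" />;'
bad = "const Widget = () => { fetch('/api'); return null; };"
assert check_widget_source(good) == []
assert check_widget_source(bad) == ["fetch("]
```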
| Component | Evidence |
|---|---|
| Dashboard layout | Drag-reorder via a JSONB document, round-trip tested |
| Redis caching | KPI hot keys, pub/sub fan-out, and widget data cache with TTL |
| Test suite | 106 backend + 70 frontend tests passing |
| 18 ADRs | Every deviation documented with context and consequences |
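The TTL semantics behind the widget data cache are simple enough to sketch in-process. `TTLCache` below is a hypothetical stand-in, not the prototype's code; the production version gets the same behavior from Redis `SETEX` with lazy expiry handled by the server.

```python
# In-process stand-in for the Redis widget-data cache (hypothetical class).
# Redis gives the same key/TTL semantics via SETEX; here expiry is checked on read.
import time

class TTLCache:
    def __init__(self) -> None:
        self._store: dict[str, tuple[float, object]] = {}

    def set(self, key: str, value: object, ttl_s: float) -> None:
        self._store[key] = (time.monotonic() + ttl_s, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

cache = TTLCache()
cache.set("kpi:claim_volume", 1234, ttl_s=0.05)
assert cache.get("kpi:claim_volume") == 1234
time.sleep(0.06)
assert cache.get("kpi:claim_volume") is None  # expired
```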

What's DEFERRED

Designed and documented -- not built in a 1-day prototype

```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#64748b', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#475569', 'lineColor': '#94a3b8', 'fontFamily': 'Inter, system-ui, sans-serif'}}}%%
graph LR
    classDef deferred fill:#64748b,stroke:#475569,color:#fff,stroke-width:2px
    classDef phase2 fill:#3b82f6,stroke:#1d4ed8,color:#fff,stroke-width:2px

    K["fa:fa-stream Kafka"]:::deferred --> KP["Phase 2<br/>same event handler"]:::phase2
    M["fa:fa-layer-group Medallion"]:::deferred --> MP["Phase 2<br/>Bronze/Silver/Gold"]:::phase2
    V["fa:fa-database Vector DB"]:::deferred --> VP["Phase 2<br/>when dict > context"]:::phase2
```
| Component | Today | Production path |
|---|---|---|
| Kafka | FastAPI HTTP ingest (same contract) | Phase 2: Kafka → same event handler, zero code change |
| Bronze/Silver/Gold pipeline | Prototype queries existing tables | Phase 2: Databricks medallion architecture |
| Model serving (Mosaic AI) | Rules + Bedrock rationale | Phase B: trained models for scoring |
| Vector DB | Full dictionary fits in context (52 tables) | Phase 2: when the dictionary exceeds the context limit |
| iframe sandbox | @babel/standalone in-process | Phase 2: CSP + iframe per ADR-006 |
| Connected systems | Visual stubs (Slack, Confluence) | Phase 2: webhook + OAuth integrations |
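The "same contract, zero code change" claim for Kafka rests on keeping one transport-agnostic handler behind both adapters. The sketch below uses hypothetical names (`handle_claim_event`, `http_ingest`, `kafka_consume`) to show the shape of that seam, not the prototype's actual code.

```python
# One business-logic entry point; HTTP today and Kafka later are thin adapters.
import json

def handle_claim_event(event: dict) -> dict:
    """Single handler, independent of how the event arrived."""
    return {"claim_id": event["claim_id"], "status": "accepted"}

# Today: FastAPI route body (transport adapter 1)
def http_ingest(request_body: bytes) -> dict:
    return handle_claim_event(json.loads(request_body))

# Phase 2: Kafka consumer loop (transport adapter 2) -- same handler, zero code change
def kafka_consume(messages: list[bytes]) -> list[dict]:
    return [handle_claim_event(json.loads(m)) for m in messages]

payload = json.dumps({"claim_id": "C-1001", "type": "screen_damage"}).encode()
assert http_ingest(payload) == {"claim_id": "C-1001", "status": "accepted"}
assert kafka_consume([payload])[0]["status"] == "accepted"
```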

The honest line

Our principle

We cut integrations to keep one credible path live: screen damage → repair with visible KPI feedback, backed by real Asurion data from Databricks.

Everything deferred is designed and documented. The deferral reasons are all "we had one day" -- not "we couldn't figure it out."

See whats-mocked-in-prototype.md for the complete accounting.


The scoreboard

```mermaid
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#1e3a5f', 'fontFamily': 'Inter, system-ui, sans-serif'}}}%%
pie title Implementation Status
    "Implemented & Tested" : 11
    "Designed & Documented" : 6
```

Speaker notes (30-45s)
  • Lead with what's real -- the "implemented" column is nearly 2x the "deferred" column.
  • Name the deferrals proactively. "Judges respect honesty. They penalize pretending."
  • Key line: "The deferral reasons are all 'we had one day' -- not 'we couldn't figure it out.' Each one has a documented production path and a specific ADR."
  • If a judge asks about any deferral, point to the specific ADR or doc -- they are all linked.
  • The pie chart is a conversation starter: 11 real, 6 deferred. That ratio for a 1-day prototype is the story.