
Integration Map

Every external service and technology integration in the Asurion Command Center, with file-path citations verifiable against the repo. Pair with architecture.md for the system view and whats-mocked-in-prototype.md for the real-vs-mocked accounting.

Summary

| # | Integration | Status | Purpose | Primary Files |
|---|-------------|--------|---------|---------------|
| 1 | Redis | REAL | KPI caching, pub/sub fan-out, widget data cache | backend/app/redis_client.py, backend/app/widgets/cache.py |
| 2 | PostgreSQL | REAL | OLTP state, metrics catalog, dashboard layouts | db/init.sql, backend/app/db.py |
| 3 | AWS Bedrock | REAL | Two-tier LLM: intent extraction, spec synthesis, SQL gen, critic | backend/app/widgets/llm.py, backend/app/sql_gen/generator.py |
| 4 | Databricks | REAL | Live metric queries (3 metrics), health probes | backend/app/databricks/client.py |
| 5 | LangGraph | REAL | 8-node clarifier graph with HITL interrupts | backend/app/widgets/graph.py |
| 6 | SSE | REAL | Streaming clarifier progress (POST-based) | backend/app/widgets/routes.py |
| 7 | Docker Compose | REAL | 5 core services + optional Langfuse stack | docker-compose.yml |
| 8 | Langfuse | REAL | Optional LLM observability (no-op when unconfigured) | backend/app/telemetry.py |
| 9 | @babel/standalone | REAL | Browser-side TSX compilation for custom widgets | frontend/src/widgets/CustomWidgetRenderer.tsx |
| 10 | sqlglot | REAL | SQL safety validation (SELECT-only, table allowlist, Databricks dialect) | backend/app/sql_gen/safety.py |
| 11 | WebSocket | REAL | Real-time dashboard updates via Redis pub/sub | backend/app/main.py:278 |
| 12 | Eval Harness | REAL | Automated LLM codegen evaluation with 7 static checks + tsc | backend/scripts/clarifier_eval/ |

1. Redis

Role: In-memory data store serving three distinct functions.

What it does

| Function | Description | Key Code |
|----------|-------------|----------|
| KPI hot-key caching | Recomputes dashboard KPIs from Postgres and mirrors them to Redis for sub-millisecond reads | backend/app/kpis.py |
| Pub/sub fan-out | Broadcasts dashboard state changes on pubsub:dashboard channel to all connected WebSocket clients | backend/app/redis_client.py:18 (publish_dashboard) |
| Widget data cache | Caches resolved widget data (Databricks query results) to avoid re-executing expensive warehouse queries | backend/app/widgets/cache.py:150 (get_cached), :181 (set_cached) |

Implementation evidence

  • Client singleton: backend/app/redis_client.py:11 — get_redis() returns a shared redis.Redis instance
  • Publish function: backend/app/redis_client.py:18 — publish_dashboard(channel, payload) JSON-encodes and publishes
  • Cache module: backend/app/widgets/cache.py — CachedEntry dataclass (:49), build_cache_key (:130), get_cached (:150), set_cached (:181)
  • Cache key schema: widget:data:{widget_id}:{spec_hash}:{intent_hash} — deterministic, content-addressed
  • Docker service: docker-compose.yml:20 — redis:7-alpine, port 6379
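The content-addressed key scheme can be sketched as follows — a minimal stand-in for build_cache_key (backend/app/widgets/cache.py:130); the exact hash function and truncation used in the real module may differ:

```python
import hashlib
import json

def _stable_hash(obj: dict) -> str:
    """Hash a dict deterministically: sorted keys, compact separators."""
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

def build_cache_key(widget_id: str, spec: dict, intent: dict) -> str:
    """widget:data:{widget_id}:{spec_hash}:{intent_hash} — identical
    spec + intent always map to the same key, so a spec edit or a new
    intent naturally misses the cache."""
    return f"widget:data:{widget_id}:{_stable_hash(spec)}:{_stable_hash(intent)}"
```

Because the key is derived purely from content, no explicit invalidation is needed when a widget's spec changes — the new spec hashes to a new key.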

Config

```
REDIS_URL=redis://redis:6379/0
```

2. PostgreSQL

Role: Durable application state and audit trail.

What it does

| Table | Purpose |
|-------|---------|
| customers, devices | Customer and device master data |
| issue_sessions | Unified session fusing customer + device + ops context |
| claims, events | Claims processing and event history |
| widgets | Persisted widget specs created via the Clarifier |
| metrics_catalog | First-class metric definitions (ADR-007) — 10 seeded, promotes one-off metrics atomically |
| dashboard_layouts | JSONB layout document for drag-reorder (ADR-009) |
| sql_generation_log | Audit trail of generated SQL queries |
| plans | Coverage plans |

Implementation evidence

  • Schema DDL: db/init.sql — canonical schema source of truth (no ORM migrations)
  • Connection factory: backend/app/db.py — SQLAlchemy engine + get_db() dependency
  • Lifespan migrations: backend/app/main.py — idempotent ALTER TABLE / CREATE TABLE IF NOT EXISTS at startup
  • Metric promotion: backend/app/widgets/routes.py — _validate_and_promote_metric() atomically inserts one-off metrics into metrics_catalog at widget persist
  • Docker service: docker-compose.yml:2 — postgres:16-alpine, port 5432, health-checked
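The idempotent lifespan-migration pattern can be sketched like this; sqlite3 is used here as a stdlib stand-in for the real SQLAlchemy/Postgres engine, and the table and column names are illustrative:

```python
import sqlite3

IDEMPOTENT_DDL = [
    # CREATE TABLE IF NOT EXISTS is safe to re-run on every boot.
    "CREATE TABLE IF NOT EXISTS dashboard_layouts (id INTEGER PRIMARY KEY, layout TEXT)",
]

def column_exists(conn, table: str, column: str) -> bool:
    """Introspect the table so ALTER TABLE is only attempted once."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

def run_startup_migrations(conn) -> None:
    for ddl in IDEMPOTENT_DDL:
        conn.execute(ddl)
    # Guarded ALTER TABLE: adding a column is only done if it is missing,
    # so restarting the API never raises "duplicate column".
    if not column_exists(conn, "dashboard_layouts", "updated_at"):
        conn.execute("ALTER TABLE dashboard_layouts ADD COLUMN updated_at TEXT")

conn = sqlite3.connect(":memory:")
run_startup_migrations(conn)
run_startup_migrations(conn)  # second run is a no-op, not an error
```

The same shape — unconditional IF NOT EXISTS statements plus introspection-guarded ALTERs — is what makes startup migrations safe without an ORM migration tool.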

Config

```
DATABASE_URL=postgresql+psycopg2://cmdcenter:cmdcenter@db:5432/cmdcenter
```

3. AWS Bedrock

Role: LLM provider for all AI capabilities via boto3 runtime.

Two-tier model policy (ADR-011)

| Tier | Default Model | Env Var | Use Cases |
|------|---------------|---------|-----------|
| Fast | us.anthropic.claude-sonnet-4-6 | BEDROCK_MODEL_ID | Intent extraction, metric matching, SQL generation |
| Reasoning | us.anthropic.claude-opus-4-7 | BEDROCK_REASONING_MODEL_ID | Spec synthesis, custom widget codegen, critic |

Four call sites

| Call Site | Tier | File | Purpose |
|-----------|------|------|---------|
| Intent extractor | Fast | backend/app/widgets/nodes/intent_extractor.py | Parse user input into widget type + metric guess |
| Spec synthesizer | Reasoning | backend/app/widgets/nodes/spec_synthesizer.py | Generate full WidgetSpec (KPI/chart/table/custom) |
| Critic | Reasoning | backend/app/widgets/nodes/critic.py | Validate and improve generated specs |
| SQL generator | Fast | backend/app/sql_gen/generator.py | Convert metric definition to safe Databricks SQL |

Implementation evidence

  • LLM abstraction: backend/app/widgets/llm.py:68 — BedrockLlm dataclass wrapping boto3 bedrock-runtime client
  • Tier resolution: backend/app/widgets/llm.py — _resolve_model_id() maps "fast" / "reasoning" to env vars
  • MockLlm fallback: backend/app/widgets/llm.py:157 — deterministic offline backend (ADR-008), activated only by BUILDER_MODE=offline
  • Tool-use JSON mode: All Bedrock calls use Anthropic tool-use (structured JSON output), never free-text parsing
  • Error contract: BuilderModeError raised on Bedrock failure in live mode — renders as error banner in UI, never silent MockLlm substitution (ADR-008)
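The tool-use JSON pattern can be sketched against the Bedrock Converse API as follows. This is a hedged illustration: build_tool_config, generate_json, and the tool name emit_result are hypothetical, not the identifiers used in llm.py — the point is that a single forced tool call makes the model emit schema-shaped JSON instead of free text:

```python
def build_tool_config(tool_name: str, schema: dict) -> dict:
    """Declare one tool and force the model to 'call' it, so the
    response payload is always JSON matching the given schema."""
    return {
        "tools": [{"toolSpec": {
            "name": tool_name,
            "inputSchema": {"json": schema},
        }}],
        "toolChoice": {"tool": {"name": tool_name}},
    }

def generate_json(model_id: str, prompt: str, schema: dict) -> dict:
    import boto3  # deferred so the pure builder above stays importable offline
    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        toolConfig=build_tool_config("emit_result", schema),
    )
    # The structured payload arrives as a toolUse content block, not free text,
    # so there is no brittle string parsing step.
    for block in resp["output"]["message"]["content"]:
        if "toolUse" in block:
            return block["toolUse"]["input"]
    raise ValueError("model did not return a toolUse block")
```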

Config

```
BUILDER_MODE=live          # "live" = Bedrock, "offline" = MockLlm
AWS_REGION=us-east-1
AWS_PROFILE=hackathon-async  # SSO; ~/.aws mounted into container
BEDROCK_MODEL_ID=us.anthropic.claude-sonnet-4-6
BEDROCK_REASONING_MODEL_ID=us.anthropic.claude-opus-4-7
```

4. Databricks SQL Warehouse

Role: Analytics data source for live Asurion claim data.

Three live metrics

| Metric ID | Table | Query Shape |
|-----------|-------|-------------|
| claim_volume_l3_asurion | l3_asurion.ev_claim | COUNT(DISTINCT claim_id) trailing 30d |
| claims_by_product_l3_asurion | l3_asurion.ev_claim | GROUP BY product_name trailing 30d |
| claim_status_mix_l3_asurion | l3_asurion.ev_claim | GROUP BY claim_status trailing 30d |

Implementation evidence

  • Pooled client: backend/app/databricks/client.py:117 — DatabricksClient with connection pooling (configurable pool size, default 4)
  • Host normalization: backend/app/databricks/client.py:47 — _normalize_host() strips protocol prefix for connector compatibility
  • Exception classification: backend/app/databricks/exceptions.py — typed hierarchy (DatabricksTimeout, DatabricksAuthError, DatabricksUnavailable, DatabricksQueryError)
  • Health probe: backend/app/databricks/health.py — GET /v1/databricks/health returns 200 with rows_sampled or 503 with RFC 7807 problem document
  • Routing config: config/metric_routing.yaml — per-metric backend declaration (postgres or databricks)
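The health probe's response contract can be sketched as a pure payload builder. databricks_health_payload is a hypothetical name for illustration; the real handler lives in backend/app/databricks/health.py:

```python
from typing import Optional

def databricks_health_payload(ok: bool, rows_sampled: int = 0,
                              error: Optional[str] = None) -> tuple[int, dict]:
    """200 with a sample count when the warehouse answers; otherwise a
    503 carrying an RFC 7807 problem document (type/title/status/detail)."""
    if ok:
        return 200, {"status": "healthy", "rows_sampled": rows_sampled}
    return 503, {
        "type": "about:blank",
        "title": "Databricks unavailable",
        "status": 503,
        "detail": error or "health probe query failed",
    }
```

Keeping the probe's contract machine-readable (RFC 7807 rather than an ad-hoc error string) lets dashboards and load balancers branch on the same payload.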

Config

```
DATABRICKS_HOST=<workspace>.cloud.databricks.com
DATABRICKS_HTTP_PATH=/sql/1.0/warehouses/<id>
DATABRICKS_AUTH_METHOD=pat
DATABRICKS_TOKEN=<personal-access-token>
DATABRICKS_CATALOG=workspace
DATABRICKS_SCHEMA=asurion_prototype
DATABRICKS_QUERY_TIMEOUT_S=30
DATABRICKS_POOL_SIZE=4
```

5. LangGraph

Role: Stateful orchestration engine for the multi-turn Widget Clarifier.

8-node graph topology

```mermaid
graph LR
    START --> contextLoader
    contextLoader --> intentExtractor
    intentExtractor --> metricMatcher
    metricMatcher --> gapDetector
    gapDetector -->|gaps found| questionPrioritizer
    gapDetector -->|no gaps| specSynthesizer
    questionPrioritizer -->|HITL pause| specSynthesizer
    specSynthesizer --> critic
    critic -->|pass| END
    critic -->|revise| specUpdater
    specUpdater --> gapDetector
```

Implementation evidence

  • Graph builder: backend/app/widgets/graph.py:61 — build_graph() adds 8 nodes with conditional edges
  • HITL interrupt: backend/app/widgets/graph.py:109 — interrupt_before=["specSynthesizer"] pauses for human answers
  • State checkpointing: InMemorySaver (LangGraph) enables resume across SSE roundtrips
  • Conditional routing: Three routing functions at :38, :44, :54 — _route_after_gap_detector, _route_after_critic, _route_after_spec_updater
  • State schema: backend/app/widgets/state.py — WidgetClarifierState TypedDict with annotated channels
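The conditional routing functions are plain predicates over graph state. A simplified sketch, assuming abbreviated field names (the real WidgetClarifierState carries more channels):

```python
from typing import List, TypedDict

class ClarifierState(TypedDict, total=False):
    """Simplified stand-in for WidgetClarifierState."""
    gaps: List[str]
    critic_verdict: str

def route_after_gap_detector(state: ClarifierState) -> str:
    """Gaps remaining -> ask the human; none -> go straight to synthesis."""
    return "questionPrioritizer" if state.get("gaps") else "specSynthesizer"

def route_after_critic(state: ClarifierState) -> str:
    """A passing critic verdict ends the graph; anything else loops back
    through the spec updater and re-enters gap detection."""
    return "END" if state.get("critic_verdict") == "pass" else "specUpdater"
```

In LangGraph these functions are wired with add_conditional_edges, which maps each returned string to a target node; keeping them pure makes the topology unit-testable without an LLM in the loop.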

Node implementations

| Node | File | Purpose |
|------|------|---------|
| contextLoader | backend/app/widgets/nodes/context_loader.py | Load metric catalog + data dictionary |
| intentExtractor | backend/app/widgets/nodes/intent_extractor.py | Extract widget type + preliminary metric |
| metricMatcher | backend/app/widgets/nodes/metric_matcher.py | Match against catalog (auto-close metric gap) |
| gapDetector | backend/app/widgets/nodes/gap_detector.py | Identify missing fields |
| questionPrioritizer | backend/app/widgets/nodes/question_prioritizer.py | Rank questions for HITL |
| specSynthesizer | backend/app/widgets/nodes/spec_synthesizer.py | Two-stage spec generation |
| critic | backend/app/widgets/nodes/critic.py | Validate spec quality |
| specUpdater | backend/app/widgets/nodes/spec_updater.py | Apply human feedback |

6. Server-Sent Events

Role: Streaming transport for the Widget Clarifier's multi-turn conversation.

Event taxonomy

| Event | Payload | When |
|-------|---------|------|
| stage | {stage: string, thread_id?: string} | Each graph node starts |
| gaps | {gaps: [...]} | Gap detector finds missing fields |
| questions | {questions: [...]} | Prioritized questions for the operator |
| result | {thread_id, interrupted, spec, ...} | Final WidgetSpec generated |
| error | {kind, message, thread_id?, builder_mode} | LLM failure or validation error |

Implementation evidence

  • SSE formatter: backend/app/widgets/routes.py:134 — _sse_event(name, data) formats as event: name\ndata: json\n\n
  • Start endpoint: POST /v1/widgets/clarify at backend/app/widgets/routes.py:146 — returns StreamingResponse with text/event-stream
  • Resume endpoint: POST /v1/widgets/clarify/respond at backend/app/widgets/routes.py:164 — accepts human answers, resumes graph
  • Frontend parser: frontend/src/widgets/api.ts:21 — uses fetch + reader.read() (not EventSource, because POST body is required)
  • Event translation: backend/app/widgets/runner.py — translates LangGraph stream events into SSE taxonomy
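The wire format above can be sketched end to end — a formatter mirroring _sse_event, plus a toy parser for illustration (the production frontend parses incrementally from a fetch reader rather than from a complete string):

```python
import json

def sse_event(name: str, data: dict) -> str:
    """SSE wire format: an event line, a data line, and a blank-line terminator."""
    return f"event: {name}\ndata: {json.dumps(data)}\n\n"

def parse_sse(stream: str):
    """Yield (event_name, payload) pairs from a buffered SSE stream."""
    for chunk in stream.strip().split("\n\n"):
        fields = dict(line.split(": ", 1) for line in chunk.split("\n"))
        yield fields["event"], json.loads(fields["data"])
```

The blank-line terminator is what lets the client split the byte stream into discrete events even when a network read delivers several events at once.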

7. Docker Compose

Role: Local development and demo orchestration.

Core services

| Service | Image | Port | Health Check |
|---------|-------|------|--------------|
| db | postgres:16-alpine | 5432 | pg_isready |
| redis | redis:7-alpine | 6379 | |
| api | Custom (FastAPI) | 8000 | |
| web | Custom (Vite/nginx) | 3080 | |
| docs | spotify/techdocs:latest | 3000 | |

Volume mounts

| Mount | Mode | Purpose |
|-------|------|---------|
| ${HOME}/.aws:/root/.aws | Read-write | boto3 SSO credential cache refresh |
| ./data-dictionary:/app/data-dictionary | Read-only | CSV data dictionary for SQL gen prompts |
| ./config:/app/config | Read-only | metric_routing.yaml for boot validation |

Implementation evidence

  • Service definitions: docker-compose.yml — all 5 services with depends_on ordering (api waits for db healthy + redis started)
  • Langfuse stack: docker-compose.langfuse.yml — optional 6-service observability stack (Postgres, ClickHouse, Redis, MinIO, web, worker)
  • Boot order: api depends on db (health-checked) and redis (started); web depends on api

8. Langfuse

Role: LLM observability and tracing (optional, no-op when unconfigured).

What it traces

| Decorator | File | What it captures |
|-----------|------|------------------|
| @trace_node | backend/app/telemetry.py:83 | Per-node duration + state deltas in the clarifier graph |
| @trace_operation | backend/app/telemetry.py:96 | Bedrock invocations with model ID, token counts, latency |
| @observe | backend/app/widgets/llm.py:81 | Raw request/response on BedrockLlm.generate_json |

Implementation evidence

  • Configuration: backend/app/telemetry.py:48 — configure_langfuse() initializes the SDK if credentials are present
  • No-op guard: backend/app/telemetry.py:35 — is_langfuse_configured() checks for LANGFUSE_SECRET_KEY; all decorators become identity wrappers when False
  • Shutdown: backend/app/telemetry.py:66 — shutdown_langfuse() flushes pending traces at process exit
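The identity-wrapper guard can be sketched as follows (simplified; the real decorators in telemetry.py also open Langfuse spans and record token counts):

```python
import functools
import os

def is_langfuse_configured() -> bool:
    return bool(os.environ.get("LANGFUSE_SECRET_KEY"))

def trace_node(name: str):
    """Tracing decorator that degrades to a pure identity wrapper when
    Langfuse credentials are absent — zero overhead when disabled."""
    def decorator(fn):
        if not is_langfuse_configured():
            return fn  # no-op path: the original function is returned untouched
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            # Real code would open a span named `name` around this call.
            return fn(*args, **kwargs)
        return wrapped
    return decorator

@trace_node("gapDetector")
def detect_gaps(state: dict) -> list:
    return [f for f in ("metric_id", "chart_kind") if f not in state]
```

Deciding at decoration time (rather than per call) means the unconfigured path adds no branch inside hot loops.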

Config

```
LANGFUSE_SECRET_KEY=sk-lf-...   # leave blank to disable
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_HOST=http://localhost:4001
```

9. @babel/standalone

Role: Browser-side TSX compilation for custom widget rendering.

Implementation evidence

  • Renderer: frontend/src/widgets/CustomWidgetRenderer.tsx — imports @babel/standalone, compiles generated TSX at render time
  • Sealed scope: Only React and Icon (Lucide) globals injected into the compiled module — no import statements allowed
  • Import stripping: Static check at persist time rejects any import or require in generated TSX
  • Error taxonomy: Compilation failures render as typed error cards (syntax error, runtime error, scope violation)
  • ADR-006: Documents the custom variant design and the iframe sandbox evolution path for production

10. sqlglot

Role: Structural SQL safety validation for generated Databricks SQL.

Safety checks

| Check | Description | Code |
|-------|-------------|------|
| Parse validation | sqlglot.parse_one(sql, dialect='databricks') — rejects unparseable SQL | backend/app/sql_gen/safety.py |
| SELECT-only | Rejects any non-SELECT statement (DROP, INSERT, UPDATE, DELETE, ALTER, GRANT, etc.) | :130 _has_forbidden_construct |
| Table allowlist | Only permits queries against tables in the metric's source_table + joined tables from dictionary | :148 _select_referenced_tables |
| LIMIT injection | Appends LIMIT 5000 if no LIMIT clause present | :209 _inject_limit |
| CTE validation | Verifies CTE-defined names don't shadow allowlisted tables | :166 _cte_names |
| Forbidden constructs | Blocks CREATE, DROP, ALTER, GRANT, REVOKE, TRUNCATE, MERGE, COPY | :43 forbidden list |
| Dialect enforcement | Always dialect='databricks' — default dialect rejects valid Databricks syntax | Throughout |

Implementation evidence

  • Safety module: backend/app/sql_gen/safety.py — SafetyResult model with per-check CheckResult items
  • Test coverage: backend/tests/test_sql_safety.py — 18 tests covering 25 adversarial SQL cases
  • ADR-PROTO-002: Anchored SQL generation — SQL gen is never free-text; always metric-catalog-bound

11. WebSocket

Role: Real-time dashboard updates from server to all connected browsers.

Data flow

```mermaid
sequenceDiagram
    participant Simulator as Event Simulator
    participant API as FastAPI
    participant PG as Postgres
    participant Redis as Redis
    participant WS as WebSocket
    participant Browser as Browser

    Simulator->>API: POST /v1/events
    API->>PG: Persist event
    API->>API: Recompute KPIs
    API->>Redis: publish_dashboard(state)
    Redis->>WS: pubsub:dashboard message
    WS->>Browser: JSON frame (KPI update)
    Browser->>Browser: Re-render dashboard tiles
```

Implementation evidence

  • WebSocket endpoint: backend/app/main.py:278 — dashboard_stream(ws) subscribes to pubsub:dashboard and fans out messages
  • Redis subscription: backend/app/main.py:289 — client.pubsub() with subscribe("pubsub:dashboard")
  • Graceful disconnect: backend/app/main.py:299 — catches WebSocketDisconnect, unsubscribes and closes pubsub
  • Frontend consumer: frontend/src/api.ts — WebSocket client connects to /v1/dashboard/stream
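The fan-out loop can be sketched with stdlib stand-ins — asyncio.Queue in place of the Redis pubsub:dashboard subscription, and a plain callable in place of the FastAPI WebSocket's send_text. The shape (await a message, forward it as a JSON frame, exit on disconnect) matches the pattern described above:

```python
import asyncio
import json

async def dashboard_stream(channel: asyncio.Queue, send_text) -> None:
    """Forward every pub/sub message to one connected client.
    A None message stands in for a client disconnect."""
    while True:
        message = await channel.get()
        if message is None:
            break  # real code catches WebSocketDisconnect and unsubscribes here
        await send_text(json.dumps(message))

async def demo() -> list:
    frames = []
    async def send_text(frame: str):
        frames.append(frame)
    channel = asyncio.Queue()
    await channel.put({"kpi": "claim_volume", "value": 42})
    await channel.put(None)
    await dashboard_stream(channel, send_text)
    return frames

frames = asyncio.run(demo())
```

Each browser connection gets its own subscription loop, so one slow client cannot stall the Redis publisher or the other clients.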

End-to-End Flow: User Asks for a Widget with Live Data

```mermaid
sequenceDiagram
    participant User as Operator
    participant UI as React Frontend
    participant SSE as SSE /v1/widgets/clarify
    participant LG as LangGraph (8 nodes)
    participant Bedrock as AWS Bedrock
    participant PG as Postgres
    participant Resolver as Data Resolver
    participant Cache as Redis Cache
    participant SQL as SQL Generator
    participant Safety as sqlglot Safety
    participant DB as Databricks

    User->>UI: "Show claim volume by region"
    UI->>SSE: POST /v1/widgets/clarify
    SSE->>LG: Start graph
    LG->>LG: contextLoader (load catalog + dictionary)
    LG->>Bedrock: intentExtractor (fast tier)
    Bedrock-->>LG: {type: "chart", metric_id_guess: "claim_volume"}
    LG->>PG: metricMatcher (lookup catalog)
    PG-->>LG: Match found, auto-close metric gap
    LG->>LG: gapDetector (check remaining gaps)
    LG-->>SSE: event: questions [{chart_kind}]
    SSE-->>UI: Stream questions
    UI-->>User: "What kind of chart?"
    User->>UI: "Bar chart"
    UI->>SSE: POST /v1/widgets/clarify/respond
    SSE->>LG: Resume graph
    LG->>Bedrock: specSynthesizer (reasoning tier)
    Bedrock-->>LG: WidgetSpec (chart variant)
    LG->>LG: critic (validate)
    LG-->>SSE: event: result {spec}
    SSE-->>UI: Stream result
    UI->>PG: POST /v1/widgets (persist + promote metric)
    UI->>Resolver: POST /v1/widgets/{id}/data
    Resolver->>Cache: Check Redis cache
    Cache-->>Resolver: Cache miss
    Resolver->>SQL: Generate SQL for metric
    SQL->>Bedrock: Bedrock (fast tier) + dictionary context
    Bedrock-->>SQL: {sql, tables_used, explanation}
    SQL->>Safety: sqlglot validate
    Safety-->>SQL: SafetyResult (passed)
    SQL->>DB: Execute query
    DB-->>SQL: Rows
    SQL-->>Resolver: Result
    Resolver->>Cache: Cache result (TTL from routing config)
    Resolver-->>UI: Data + source: "databricks"
    UI->>UI: Render chart + purple SourceBadge
```

12. Clarifier Eval Harness

Role: Automated evaluation pipeline for LLM-generated widget code quality.

What it does

The eval harness reuses the production LangGraph nodes (contextLoader, intentExtractor, gapDetector, questionPrioritizer) and swaps the back half for codegen-specific nodes (componentSynthesizer, componentCritic). This tests the full AI pipeline end-to-end against defined targets without modifying production code.

Eval pipeline

```mermaid
graph LR
    Target["EvalTarget<br/>(raw_input + auto_answers)"] --> Graph
    subgraph Graph ["Eval LangGraph"]
        contextLoader --> intentExtractor
        intentExtractor --> gapDetector
        gapDetector --> questionPrioritizer
        questionPrioritizer -->|auto-answer| componentSynthesizer
        componentSynthesizer -->|Bedrock call| componentCritic
    end
    Graph --> Validators
    subgraph Validators ["Validation"]
        Static["7 Static Checks"]
        TSC["tsc --noEmit"]
    end
    Validators --> Artifacts["artifacts/clarifier-eval/"]
```

7 static checks (backend/scripts/clarifier_eval/validators/static_checks.py)

| Check | What it catches |
|-------|-----------------|
| declares_props_type | Missing interface or type declaration |
| declares_component | Missing component function/const matching display_name |
| exports_something | No export keyword |
| imports_in_allowlist | Banned imports (lodash, moment, axios, next/, @/ aliases) |
| brace_balance | Unbalanced {}, (), [] in JSX |
| severity_literals_referenced | Severity keys declared but never used in TSX |
| tailwind_color_families_allowed | Off-design-system Tailwind color families |

Plus two bonus checks: no_any_or_ts_ignore (no bare any or @ts-ignore) and no_inline_style_prop (Tailwind-only per system prompt).
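As an example of how lightweight these deterministic validators are, the brace_balance check can be approximated by a simple stack scan. This is a sketch, not the production validator — among other things it ignores braces inside string literals and JSX text:

```python
def brace_balance(tsx_source: str) -> bool:
    """True when {}, (), and [] all pair up in order.
    A coarse structural check run before the heavier tsc pass."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in tsx_source:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            # Closing bracket must match the most recent opener.
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers also count as imbalance
```

Cheap checks like this fail fast on obviously broken codegen output, reserving the tsc --noEmit run for candidates that pass the static gate.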

Artifact capture

Each eval run persists to artifacts/clarifier-eval/<target>/<timestamp>/:

| File | Content |
|------|---------|
| transcript.json | Full stage-by-stage execution log with timestamps |
| bedrock_request.json | Raw Bedrock API request body |
| bedrock_response.json | Raw Bedrock API response body |
| component_spec.json | Generated ComponentSpec (display_name, tsx_source, imports, severity_color_map) |
| <DisplayName>.tsx | Extracted TSX source |
| validation.json | Static check + tsc results |
| summary.md | Human-readable PASS/FAIL summary |
| reference_*.tsx | Reference component for visual comparison |

Implementation evidence

  • Eval graph: backend/scripts/clarifier_eval/graph.py — reuses 4 production nodes, adds 2 eval-only nodes
  • Runner: backend/scripts/clarifier_eval/runner.py — orchestrates graph, auto-answers HITL questions, persists artifacts
  • Static validators: backend/scripts/clarifier_eval/validators/static_checks.py — 7 deterministic code quality checks
  • TSC validator: backend/scripts/clarifier_eval/validators/tsc_check.py — tsc --noEmit against generated TSX
  • Target definitions: backend/scripts/clarifier_eval/targets/alerts_feed.py — defines eval cases with auto-answers
  • CLI entry point: make clarifier-eval — runs against configured target, writes to artifacts/clarifier-eval/
  • Live artifacts: artifacts/clarifier-eval/alerts_feed/ — 2 timestamped eval runs with full Bedrock capture

Test Coverage for Integrations

| Integration | Test File | Test Count |
|-------------|-----------|------------|
| Redis (cache) | backend/tests/test_data_resolver.py | 19 |
| PostgreSQL (layout) | backend/tests/test_dashboard_layout.py | 10 |
| PostgreSQL (layout unit) | backend/tests/test_dashboard_layout_unit.py | 6 |
| Databricks (client) | backend/tests/test_databricks_client.py | 4 |
| Bedrock (LLM factory) | backend/tests/test_llm_factory.py | 7 |
| SQL gen (generator) | backend/tests/test_sql_generator.py | 14 |
| SQL gen (routes) | backend/tests/test_sql_generator_routes.py | 9 |
| sqlglot (safety) | backend/tests/test_sql_safety.py | 18 |
| Dictionary (loader) | backend/tests/test_dictionary_loader.py | 13 |
| Routing (validator) | backend/tests/test_routing_validator.py | 6 |
| Total backend | 10 files | 106 |

See technical-evidence.md for the complete test inventory including 15 frontend test suites (70 tests).