Architecture
There are four current execution paths: metrics, replay, the replay API, and the replay UI.
Lighthouse is still early in the rebuild, so the architecture is kept intentionally direct: a read-only metrics path for cluster visibility plus shared replay paths for CLI and API execution.
Dashboard Flow
Browser
-> Next.js page
-> /api/dashboard-metrics
-> Prometheus fetch client
-> Prometheus HTTP API
-> Kafka metrics exporter
-> Kafka Admin metadata
Prometheus access stays on the server side. The browser never hits Prometheus directly.
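The server-side hop from the API route to Prometheus can be sketched as a parser for a Prometheus instant-query response. The response shape below follows the Prometheus HTTP API (`status`, `data.resultType`, `data.result`); the function and type names are illustrative, not code from this repo:

```typescript
// A sample in a Prometheus instant-query ("vector") result.
interface PromSample {
  metric: Record<string, string>;
  value: [number, string]; // [unix timestamp, value as a string]
}

// Top-level shape returned by GET /api/v1/query?query=...
interface PromQueryResponse {
  status: "success" | "error";
  data?: { resultType: string; result: PromSample[] };
  error?: string;
}

// Flatten a vector result into { labels, numeric value } rows
// that the dashboard route can return to the browser.
function parseInstantVector(
  body: PromQueryResponse
): { metric: Record<string, string>; value: number }[] {
  if (body.status !== "success" || !body.data) {
    throw new Error(`Prometheus query failed: ${body.error ?? "unknown"}`);
  }
  return body.data.result.map((s) => ({
    metric: s.metric,
    value: Number(s.value[1]),
  }));
}
```

Keeping this parsing in the route handler is what lets the browser stay ignorant of Prometheus entirely.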
Replay Flow
CLI input
-> Kafka admin validation
-> Optional timestamp-to-offset resolution
-> Source topic subscription
-> Partition seek to start offset
-> Optional write throttling
-> Per-message preview or replay
-> Destination topic write
-> Summary and progress logs
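Once the timestamp-to-offset step has resolved concrete per-partition offsets, the replay size and progress reporting become plain arithmetic. A minimal sketch of that step, with illustrative names and an assumed range shape:

```typescript
// A resolved replay range for one partition. startOffset comes from
// the timestamp lookup; endOffset is the high watermark captured at
// resolution time, so the replay window is fixed up front.
interface PartitionRange {
  partition: number;
  startOffset: number; // first offset to replay (inclusive)
  endOffset: number;   // stop boundary (exclusive)
}

// Total messages the replay will copy; used for the summary
// and progress logs. Empty or already-drained partitions count as 0.
function totalMessagesToReplay(ranges: PartitionRange[]): number {
  return ranges.reduce(
    (sum, r) => sum + Math.max(0, r.endOffset - r.startOffset),
    0
  );
}
```

Fixing the end offsets at resolution time is what keeps a replay deterministic even while new messages keep arriving on the source topic.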
Replay Job Flow
Job create command
-> SQLite draft record
-> Job start command
-> Status moves to running
-> Replay engine executes
-> Progress updates written to SQLite
-> Status moves to completed or failed
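The status lifecycle above is a small state machine: a job is drafted, started, and then terminates. A sketch of the allowed transitions (status names taken from the flow, the helper name assumed):

```typescript
// Job statuses as they appear in the flow above.
type JobStatus = "draft" | "running" | "completed" | "failed";

// draft -> running -> completed | failed; terminal states go nowhere.
const transitions: Record<JobStatus, JobStatus[]> = {
  draft: ["running"],
  running: ["completed", "failed"],
  completed: [],
  failed: [],
};

// Guard used before writing a status change to the store.
function canTransition(from: JobStatus, to: JobStatus): boolean {
  return transitions[from].includes(to);
}
```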
Replay API Flow
HTTP client
-> /api/jobs or /api/jobs/:id/*
-> Replay job service
-> SQLite job store
-> In-process replay runner for start requests
-> Kafka replay engine
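The job service sits between the HTTP routes and the store, so its contract can be sketched as an interface with an in-memory stand-in. The real store is SQLite-backed; every name below is illustrative:

```typescript
// Hypothetical record shape behind /api/jobs.
interface ReplayJob {
  id: string;
  status: "draft" | "running" | "completed" | "failed";
  progress: number; // messages replayed so far
}

// Contract the routes and the replay runner both code against.
interface JobStore {
  create(id: string): ReplayJob;
  get(id: string): ReplayJob | undefined;
  update(id: string, patch: Partial<ReplayJob>): void;
}

// In-memory stand-in with the same contract as the SQLite store,
// handy for tests that exercise the service without a database.
class MemoryJobStore implements JobStore {
  private jobs = new Map<string, ReplayJob>();
  create(id: string): ReplayJob {
    const job: ReplayJob = { id, status: "draft", progress: 0 };
    this.jobs.set(id, job);
    return job;
  }
  get(id: string): ReplayJob | undefined {
    return this.jobs.get(id);
  }
  update(id: string, patch: Partial<ReplayJob>): void {
    const job = this.jobs.get(id);
    if (job) Object.assign(job, patch);
  }
}
```

Because the runner is in-process, the store is the only shared state between the API routes and a running replay.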
Replay UI Flow
Browser
-> Replay workspace
-> /api/jobs and /api/jobs/:id/*
-> Replay job service
-> SQLite job store
-> In-process replay runner
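On the UI side, progress typically reaches the browser by polling the job endpoints until a terminal status. A sketch under that assumption, with an injected `fetchJob` helper standing in for a `GET /api/jobs/:id` call:

```typescript
// Poll a job until it reaches a terminal status. fetchJob is
// injected so the loop is testable without a server; in the
// workspace it would wrap fetch("/api/jobs/" + id).
async function pollUntilDone(
  fetchJob: (id: string) => Promise<{ status: string; progress: number }>,
  id: string,
  intervalMs = 1000
): Promise<{ status: string; progress: number }> {
  for (;;) {
    const job = await fetchJob(id);
    if (job.status === "completed" || job.status === "failed") return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```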
Local Sample Stack
- Three Kafka brokers in KRaft mode
- Topic initializer for orders, payments, and orders-replay
- Demo producer for recurring sample traffic
- Kafka metrics exporter
- Prometheus
- Next.js application
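Wiring Prometheus to the exporter amounts to a single scrape job. A minimal prometheus.yml fragment for the sample stack; the target hostname and port are assumptions (9308 is a common kafka_exporter default), not this repo's actual config:

```yaml
# Sketch of the scrape job for the Kafka metrics exporter.
scrape_configs:
  - job_name: kafka
    static_configs:
      - targets: ["kafka-exporter:9308"]
```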
Why the Rebuild Uses This Shape
- It keeps production Prometheus and Kafka access behind explicit configuration.
- It keeps replay execution deterministic by resolving all jobs to concrete offsets.
- It gives the project a stable demo story that does not depend on hosted clusters.