# MVP Acceptance
Phase 6.5 proves out the replay MVP before alpha quality control begins.
The acceptance pass verifies that Lighthouse can replay bounded Kafka history safely through the CLI, API, UI, and local Docker sample path.
## Current Status
Accepted locally on 2026-04-28 with the sample Kafka stack.
Start the sample Kafka stack before running the live Kafka acceptance checks:

```powershell
npm run docker:sample:detached
```
Then run the full verification sequence (the `$env:` lines and `npm.cmd` assume a Windows PowerShell session):

```powershell
npm run verify
npm run e2e
$env:KAFKA_INTEGRATION="1"
$env:KAFKA_BROKERS="localhost:19092,localhost:19093,localhost:19094"
npm.cmd run test:kafka:integration
npm run docker:config
```
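For reference, here is a minimal sketch of how a test harness might read the settings above. Only the `KAFKA_INTEGRATION` and `KAFKA_BROKERS` variable names come from this document; the helper function, its return shape, and its defaults are hypothetical, not Lighthouse's actual code.

```typescript
// Hypothetical helper: parse the integration-test environment settings.
// KAFKA_INTEGRATION and KAFKA_BROKERS are the variables shown above;
// everything else here is illustrative.
interface KafkaTestConfig {
  enabled: boolean;
  brokers: string[];
}

function readKafkaTestConfig(
  env: Record<string, string | undefined>
): KafkaTestConfig {
  return {
    // The suite only exercises live Kafka when KAFKA_INTEGRATION="1".
    enabled: env.KAFKA_INTEGRATION === "1",
    // KAFKA_BROKERS is a comma-separated host:port list.
    brokers: (env.KAFKA_BROKERS ?? "")
      .split(",")
      .map((b) => b.trim())
      .filter((b) => b.length > 0),
  };
}
```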
## Acceptance Checklist
| Area | Evidence |
|---|---|
| Offset replay | Live Kafka integration copies a bounded offset range into a destination topic and verifies replay headers. |
| Timestamp replay | Integration coverage writes timestamped records, resolves a time window, and replays only matching records. |
| Dry-run safety | Dry-run preview validates the replay plan without producing destination records. |
| Loop prevention | Validation rejects source and destination topic collisions before replay starts. |
| Kafka run modes | Docs cover local Docker Kafka, an external Kafka endpoint, and Confluent Cloud settings. |
| Docs parity | README and GitHub Pages describe the same replay modes, safety model, API routes, UI workflow, and limits. |
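The loop-prevention row can be illustrated with a small validation sketch. The rule itself (reject a replay whose source and destination topics collide, before the replay starts) comes from the checklist above; the `ReplayPlan` shape and `validateReplayPlan` name are hypothetical, not Lighthouse APIs.

```typescript
// Hypothetical validation sketch: a replay whose destination topic equals
// its source topic would re-consume its own output, so it is rejected
// up front, before any records are produced.
interface ReplayPlan {
  sourceTopic: string;
  destinationTopic: string;
}

function validateReplayPlan(plan: ReplayPlan): string[] {
  const errors: string[] = [];
  if (plan.sourceTopic === plan.destinationTopic) {
    errors.push(
      `source and destination topics collide: "${plan.sourceTopic}"`
    );
  }
  return errors; // an empty array means the plan passed validation
}
```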
## Acceptance Notes
- Dry-run preview uses the same replay validation as actual replay, but never creates a producer.
- Timestamp replay uses inclusive start and exclusive end semantics, then stores resolved offsets on the job.
- Cancellation is cooperative for API-started and CLI-started jobs.
- Multi-cluster replay, RBAC, Schema Registry integration, Kubernetes deployment, and exactly-once replay guarantees remain outside the MVP.
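The inclusive-start, exclusive-end window semantics in the notes above can be sketched as a record filter. The record shape and function names here are hypothetical; only the `startMs <= t < endMs` rule is taken from this document.

```typescript
// Illustrative sketch of the timestamp-window semantics described above:
// a record is replayed exactly when startMs <= record.timestampMs < endMs.
// TimestampedRecord is a hypothetical shape, not a Lighthouse type.
interface TimestampedRecord {
  timestampMs: number;
  value: string;
}

function inWindow(
  record: TimestampedRecord,
  startMs: number,
  endMs: number
): boolean {
  // Inclusive start, exclusive end.
  return record.timestampMs >= startMs && record.timestampMs < endMs;
}

function selectForReplay(
  records: TimestampedRecord[],
  startMs: number,
  endMs: number
): TimestampedRecord[] {
  return records.filter((r) => inWindow(r, startMs, endMs));
}
```

The exclusive end means a record stamped exactly at the window's end boundary is not replayed, which keeps back-to-back windows non-overlapping.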
The live Kafka integration suite can emit a non-blocking `TimeoutNegativeWarning` from the Kafka client path while still passing. If the warning remains reproducible, treat it as an alpha-phase cleanup item.