Context
The migration delivered feature parity, but efficiency work remains: backend query paths still do broad copies and whole-frame recomputation even when only slices are needed. At the same time, business constraints explicitly require full-table caching for resource and wip because those datasets are intentionally small and frequently reused. This design optimizes around that constraint rather than removing it.
Goals / Non-Goals
Goals:
- Keep resource and wip full-table caches intact.
- Reduce memory amplification from redundant cache representations.
- Replace repeated full merge/rebuild paths with incremental/indexed query plans where applicable.
- Increase reuse of browser-side compute modules for chart/table/filter/KPI derivations.
- Add measurable telemetry to verify latency and memory improvements.
Non-Goals:
- Rewriting all reporting endpoints to client-only mode.
- Removing Redis or existing layered cache strategy.
- Changing user-visible filter semantics or report outputs.
Decisions
- Constrained cache strategy
  - Decision: retain full-table snapshots for resource and wip; optimize surrounding representations and derivation paths.
  - Rationale: business-approved data-size profile and low complexity for frequent lookups.
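The constrained strategy can be sketched as a versioned whole-snapshot swap: because resource and wip are intentionally small, reads stay O(1) against an in-process structure and writers replace the whole table atomically. All names here (`FullTableCache`, `wipCache`, the row fields) are illustrative assumptions, not the project's real API.

```typescript
type Row = Record<string, string | number>;

class FullTableCache {
  private rows: Row[] = [];
  private version = 0;

  // Replace the whole snapshot atomically; acceptable because the
  // business constraint guarantees these datasets stay small.
  replaceAll(rows: Row[]): void {
    this.rows = rows;
    this.version += 1;
  }

  // Readers share the current snapshot and must treat it as immutable;
  // the version lets callers detect staleness cheaply.
  snapshot(): { version: number; rows: readonly Row[] } {
    return { version: this.version, rows: this.rows };
  }
}

const wipCache = new FullTableCache();
wipCache.replaceAll([
  { id: 1, status: "open" },
  { id: 2, status: "done" },
]);
```

Handing out one shared, read-only snapshot (rather than defensive copies per request) is what removes the broad-copy cost while keeping the full-table constraint intact.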
- Incremental + indexed path for heavy derived datasets
  - Decision: add watermark/version-aware incremental refresh and per-column indexes for high-cardinality filters.
  - Rationale: avoids repeated full recomputation and lowers request tail latency.
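A minimal sketch of this decision, under assumed names (`DerivedCache`, a `plant` filter column, `updatedAt` as the watermark source): refresh applies only rows newer than the watermark instead of rebuilding the table, and filtered reads go through a per-column index instead of scanning.

```typescript
interface DerivedRow {
  id: number;
  plant: string;   // assumed high-cardinality filter column
  qty: number;
  updatedAt: number; // monotonically increasing source version
}

class DerivedCache {
  private rows = new Map<number, DerivedRow>();       // canonical rows by id
  private byPlant = new Map<string, Set<number>>();   // per-column index
  private watermark = 0;                              // last applied source version

  // Incremental refresh: apply only rows newer than the watermark,
  // keeping the index consistent as rows move between key values.
  refresh(changes: DerivedRow[]): void {
    for (const row of changes) {
      if (row.updatedAt <= this.watermark) continue;  // already applied
      const prev = this.rows.get(row.id);
      if (prev) this.byPlant.get(prev.plant)?.delete(row.id); // unindex old value
      this.rows.set(row.id, row);
      if (!this.byPlant.has(row.plant)) this.byPlant.set(row.plant, new Set());
      this.byPlant.get(row.plant)!.add(row.id);
      this.watermark = Math.max(this.watermark, row.updatedAt);
    }
  }

  // Indexed lookup: no scan over the whole derived table.
  filterByPlant(plant: string): DerivedRow[] {
    const ids = this.byPlant.get(plant) ?? new Set<number>();
    return [...ids].map((id) => this.rows.get(id)!);
  }
}
```

The watermark is what makes refresh idempotent: replaying an already-applied batch is a no-op, which simplifies retry behavior in the sync path.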
- Canonical in-process structure
  - Decision: keep one canonical structure per cache domain and derive alternate views on demand.
  - Rationale: reduces 2x/3x memory amplification from parallel representations.
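As an illustration of the single-canonical-structure rule (field and function names are assumptions): rows are stored once, and a grouped view is computed on demand from the canonical array rather than maintained as a second long-lived copy alongside it.

```typescript
interface ResourceRow {
  id: number;
  group: string;
  capacity: number;
}

// The one canonical structure for the (hypothetical) resource domain.
const canonicalResources: ResourceRow[] = [
  { id: 1, group: "mill", capacity: 10 },
  { id: 2, group: "mill", capacity: 4 },
  { id: 3, group: "lathe", capacity: 6 },
];

// Derived view, computed on demand from the canonical rows. If it proves
// hot, memoize it and invalidate on snapshot swap instead of persisting
// a parallel representation.
function capacityByGroup(rows: ResourceRow[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of rows) {
    totals.set(r.group, (totals.get(r.group) ?? 0) + r.capacity);
  }
  return totals;
}
```

The memory win comes from the lifetime difference: the derived map lives only as long as the request (or a memo slot), instead of duplicating every row for the lifetime of the cache.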
- Frontend compute module expansion
  - Decision: extract reusable browser compute helpers for matrix/table/KPI transformations used across report pages.
  - Rationale: shifts deterministic shaping work off the backend and improves component reuse in the Vite architecture.
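The kind of helper this decision targets looks roughly like the following: a pure, deterministic reducer that chart, table, and KPI components can all import. The record shape and function name (`summarizeWip`) are placeholders for the sketch, not the project's actual module.

```typescript
interface WipRecord {
  station: string;
  pieces: number;
  defects: number;
}

interface KpiSummary {
  totalPieces: number;
  defectRate: number;
}

// Deterministic shaping: same input always yields the same summary and
// nothing here touches the network, so it is safe to run in the browser
// instead of on the backend.
function summarizeWip(records: WipRecord[]): KpiSummary {
  const totalPieces = records.reduce((sum, r) => sum + r.pieces, 0);
  const totalDefects = records.reduce((sum, r) => sum + r.defects, 0);
  return {
    totalPieces,
    defectRate: totalPieces === 0 ? 0 : totalDefects / totalPieces,
  };
}
```

Because the helper is a pure function, it is also trivially unit-testable and reusable across report pages, which is the component-reuse half of the rationale.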
- Benchmark-driven acceptance
  - Decision: add repeatable benchmark fixtures and telemetry thresholds as merge gates.
  - Rationale: prevents subjective "performance improved" claims without measurable proof.
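A merge gate of this shape could be as small as the sketch below: run the benchmark fixture, compare the measured result against recorded thresholds, and fail with explicit reasons on regression. The threshold values and metric names here are placeholders, not agreed budgets.

```typescript
interface BenchResult {
  p95LatencyMs: number;
  peakHeapMb: number;
}

// Placeholder budgets; in practice these would be checked into the repo
// next to the benchmark fixtures and updated deliberately.
const thresholds: BenchResult = { p95LatencyMs: 250, peakHeapMb: 512 };

// Returns a list of human-readable failures; an empty list passes the gate.
function gate(result: BenchResult): string[] {
  const failures: string[] = [];
  if (result.p95LatencyMs > thresholds.p95LatencyMs) {
    failures.push(
      `p95 latency ${result.p95LatencyMs}ms exceeds ${thresholds.p95LatencyMs}ms`,
    );
  }
  if (result.peakHeapMb > thresholds.peakHeapMb) {
    failures.push(
      `peak heap ${result.peakHeapMb}MB exceeds ${thresholds.peakHeapMb}MB`,
    );
  }
  return failures;
}
```

Reporting every violated threshold at once (rather than failing fast) keeps gate output actionable when a change regresses more than one metric.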
Risks / Trade-offs
- [Risk] Incremental sync correctness drift → Mitigation: version checksum validation and periodic full reconciliation jobs.
- [Risk] Browser compute can increase client CPU on low-end devices → Mitigation: bounded dataset chunking and fallback server aggregation path.
- [Risk] Refactor introduces subtle field-contract regressions → Mitigation: keep export/header contract tests and fixture comparisons.
- [Risk] Telemetry overhead → Mitigation: low-cost counters/histograms with sampling where needed.
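The checksum-validation mitigation for sync drift can be sketched as follows. Both sides hash the same canonical serialization of the table (sorted by id, so row order cannot cause false drift), and a mismatch triggers full reconciliation. The choice of FNV-1a and the function names are assumptions for illustration; any stable hash over a canonical encoding would serve.

```typescript
// FNV-1a over a string, kept in unsigned 32-bit range via Math.imul and >>> 0.
function fnv1a(text: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < text.length; i++) {
    hash ^= text.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

interface SyncRow {
  id: number;
  qty: number;
}

// Canonical serialization: sort by id so the checksum depends only on
// content, never on arrival or storage order.
function tableChecksum(rows: SyncRow[]): number {
  const canonical = [...rows]
    .sort((a, b) => a.id - b.id)
    .map((r) => `${r.id}:${r.qty}`)
    .join("|");
  return fnv1a(canonical);
}

// Cheap drift check run periodically; a mismatch escalates to the full
// reconciliation job instead of trusting the incremental stream.
function needsFullReconciliation(source: SyncRow[], replica: SyncRow[]): boolean {
  return tableChecksum(source) !== tableChecksum(replica);
}
```

A 32-bit checksum can collide, so this detects drift cheaply rather than proving equality; the periodic full reconciliation job remains the backstop.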