Introduce a unified Seed→Lineage→Event pipeline replacing per-page Python BFS with Oracle CONNECT BY NOCYCLE queries, add staged /api/trace/* endpoints with rate limiting and L2 Redis caching, and wire progressive frontend loading via the useTraceProgress composable.

Key changes:

- Add LineageEngine (split ancestors / merge sources / full genealogy) with QueryBuilder bind-param safety and batched IN clauses
- Add EventFetcher with 6-domain support and an L2 Redis cache
- Add trace_routes Blueprint (seed-resolve, lineage, events) with profile dispatch, rate limiting, and Redis TTL=300s caching
- Refactor query_tool_service to use LineageEngine and QueryBuilder, removing raw string interpolation (SQL injection fix)
- Add rate limits and a resolve cache to query_tool_routes
- Integrate useTraceProgress into mid-section-defect with skeleton placeholders and fade-in transitions
- Add lineageCache and on-demand lot lineage to query-tool
- Add TraceProgressBar shared component
- Remove legacy query-tool.js static script (3k lines)
- Fix MatrixTable package column truncation (.slice(0,15) removed)
- Archive unified-lineage-engine change, add trace-progressive-ui specs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
## ADDED Requirements
### Requirement: query-tool resolve functions SHALL use QueryBuilder bind params for all user input

All `resolve_lots()` family functions (`_resolve_by_lot_id`, `_resolve_by_serial_number`, `_resolve_by_work_order`) SHALL use `QueryBuilder.add_in_condition()` with bind parameters instead of `_build_in_filter()` string concatenation.
#### Scenario: Lot resolve with user-supplied values

- WHEN a resolve function receives user-supplied lot IDs, serial numbers, or work order names
- THEN the SQL query SHALL use `:p0, :p1, ...` bind parameters via `QueryBuilder`
- THEN `read_sql_df()` SHALL receive `builder.params` (never an empty `{}` dict for queries with user input)
- THEN `_build_in_filter()` and `_build_in_clause()` SHALL NOT be called
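The scenario above can be sketched as follows. The real `QueryBuilder` implementation is not shown in this spec, so this is a minimal illustrative stand-in; only `add_in_condition()` and the `.params` attribute are taken from the requirement text, and `where_clause()` is a hypothetical helper added for the demo.

```python
# Minimal sketch of a QueryBuilder that emits :p0, :p1, ... bind parameters
# for IN clauses instead of interpolating user input into the SQL string.
# add_in_condition() and .params follow this spec; where_clause() is
# illustrative only.

class QueryBuilder:
    def __init__(self) -> None:
        self._conditions: list[str] = []
        self.params: dict[str, object] = {}

    def add_in_condition(self, column: str, values: list) -> None:
        """Append `column IN (:pN, ...)` with one bind param per value."""
        names = []
        for value in values:
            name = f"p{len(self.params)}"
            self.params[name] = value          # value travels as a bind param
            names.append(f":{name}")
        self._conditions.append(f"{column} IN ({', '.join(names)})")

    def where_clause(self) -> str:
        return " AND ".join(self._conditions)


builder = QueryBuilder()
builder.add_in_condition("LOT_ID", ["L001", "L002"])
print(builder.where_clause())  # LOT_ID IN (:p0, :p1)
print(builder.params)          # {'p0': 'L001', 'p1': 'L002'}
```

Because the user-supplied values only ever appear in `builder.params`, the resulting SQL text is constant regardless of input, which is what closes the injection path that `_build_in_filter()` string concatenation left open.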
#### Scenario: Pure static SQL without user input

- WHEN a query contains no user-supplied values (e.g., static lookups)
- THEN an empty `{}` params dict is acceptable
- THEN `_build_in_filter()` SHALL NOT be used
#### Scenario: Zero residual references to deprecated functions

- WHEN the refactoring is complete
- THEN grep for `_build_in_filter` and `_build_in_clause` SHALL return zero results across the entire codebase
### Requirement: query-tool routes SHALL apply rate limiting

All query-tool API endpoints SHALL apply per-client rate limiting using the existing `configured_rate_limit` mechanism.
#### Scenario: Resolve endpoint rate limit exceeded

- WHEN a client sends more than 10 requests to query-tool resolve endpoints within 60 seconds
- THEN the endpoint SHALL return HTTP 429 with a `Retry-After` header
- THEN the resolve service function SHALL NOT be called
#### Scenario: History endpoint rate limit exceeded

- WHEN a client sends more than 20 requests to query-tool history endpoints within 60 seconds
- THEN the endpoint SHALL return HTTP 429 with a `Retry-After` header
#### Scenario: Association endpoint rate limit exceeded

- WHEN a client sends more than 20 requests to query-tool association endpoints within 60 seconds
- THEN the endpoint SHALL return HTTP 429 with a `Retry-After` header
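The three scenarios above share one behavior: count requests per client over a sliding 60-second window, and refuse with a `Retry-After` hint once the per-endpoint limit is hit. The actual `configured_rate_limit` decorator is not shown in this spec, so the sketch below is a hypothetical in-memory sliding-window limiter illustrating that contract:

```python
# Hypothetical sliding-window rate limiter matching the scenarios above:
# when the limit is exceeded it reports "denied" plus a Retry-After value,
# so the route can return 429 without calling the service function.
# The real `configured_rate_limit` mechanism is assumed, not shown, here.
import time
from collections import defaultdict, deque

_hits: dict[str, deque] = defaultdict(deque)

def rate_limit(client_id: str, limit: int, window_s: int = 60):
    """Return (allowed, retry_after_seconds) for one incoming request."""
    now = time.monotonic()
    hits = _hits[client_id]
    # Drop timestamps that have aged out of the window.
    while hits and now - hits[0] >= window_s:
        hits.popleft()
    if len(hits) >= limit:
        # Denied: oldest hit in the window tells us when a slot frees up.
        return False, int(window_s - (now - hits[0])) + 1
    hits.append(now)
    return True, 0
```

In a Flask route this would wrap the handler: on a `False` result, return 429 with the second value in a `Retry-After` header and never invoke the resolve/history/association service.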
### Requirement: query-tool routes SHALL apply response caching

High-cost query-tool endpoints SHALL cache responses in L2 Redis.
#### Scenario: Resolve result caching
- WHEN a resolve request succeeds
- THEN the response SHALL be cached in L2 Redis with TTL = 60s
- THEN subsequent identical requests within TTL SHALL return cached result without Oracle query
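A sketch of that cache-aside pattern, keyed by a canonical hash of the request parameters so that "identical requests" map to the same entry. An in-memory dict stands in for L2 Redis here (with redis-py this would be `r.setex(key, 60, payload)` / `r.get(key)`); the function and key names are illustrative, not from the codebase:

```python
# Cache-aside sketch of the resolve-result cache: hit -> return cached
# payload without touching Oracle; miss -> run the query and store the
# result with a 60s TTL. A dict stands in for L2 Redis in this sketch.
import hashlib
import json
import time

_cache: dict[str, tuple[float, str]] = {}
TTL_S = 60

def cache_key(endpoint: str, params: dict) -> str:
    # sort_keys makes {"a":1,"b":2} and {"b":2,"a":1} share one key.
    canonical = json.dumps(params, sort_keys=True)
    return f"{endpoint}:{hashlib.sha256(canonical.encode()).hexdigest()}"

def cached_resolve(endpoint: str, params: dict, query_oracle) -> str:
    key = cache_key(endpoint, params)
    entry = _cache.get(key)
    if entry and time.monotonic() - entry[0] < TTL_S:
        return entry[1]                     # cache hit: no Oracle round-trip
    payload = query_oracle(params)          # cache miss: run the real query
    _cache[key] = (time.monotonic(), payload)
    return payload
```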
### Requirement: lot_split_merge_history SHALL support fast and full query modes

The `lot_split_merge_history.sql` query SHALL support two modes to balance traceability completeness against performance.
#### Scenario: Fast mode (default)

- WHEN the `full_history` query parameter is absent or `false`
- THEN the SQL SHALL include a `TXNDATE >= ADD_MONTHS(SYSDATE, -6)` time window and `FETCH FIRST 500 ROWS ONLY`
- THEN query response time SHALL be ≤5s (P95)
#### Scenario: Full mode

- WHEN the `full_history=true` query parameter is provided
- THEN the SQL SHALL NOT include the time window restriction
- THEN the query SHALL use `read_sql_df_slow` (120s timeout)
- THEN query response time SHALL be ≤60s (P95)
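The two modes above can be sketched as a single dispatch point. The SQL fragments, the `full_history` flag, and the 120s/fast timeouts come from this spec; the base statement and the builder function itself are hypothetical:

```python
# Sketch of the fast/full mode split for lot_split_merge_history:
# fast (default) appends a 6-month window and a 500-row cap; full drops
# both and is routed to the slow reader with a 120s timeout.
# BASE_SQL and build_history_sql are illustrative names, not from the repo.
BASE_SQL = "SELECT * FROM LOT_SPLIT_MERGE_HISTORY WHERE LOT_ID = :p0"

def build_history_sql(full_history: bool) -> tuple[str, int]:
    """Return (sql, timeout_s) for the requested mode."""
    if full_history:
        # Full mode: unbounded history, read via read_sql_df_slow (120s).
        return BASE_SQL, 120
    # Fast mode: bounded window plus hard row cap keeps P95 within ~5s.
    sql = (BASE_SQL
           + " AND TXNDATE >= ADD_MONTHS(SYSDATE, -6)"
           + " FETCH FIRST 500 ROWS ONLY")
    return sql, 5
```

The route would read `full_history` from the query string (treating anything other than `true` as fast mode) and pick `read_sql_df` or `read_sql_df_slow` based on the returned timeout.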