Pre-compute 6-dimension metric cubes from cached lot-level DataFrames so interactive Pareto requests read compact snapshots instead of re-scanning detail rows on every filter change. Includes a single-flight build guard, TTL/size guardrails, cross-filter exclude-self evaluation, a safe legacy fallback, response metadata exposure, telemetry counters, and a 3-stage rollout plan (telemetry-only → build-enabled → read-through).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
MODIFIED Requirements
Requirement: Reject History API SHALL provide batch Pareto endpoint with cross-filter
The API SHALL provide a batch Pareto endpoint that returns all 6 dimension Pareto results in a single response, supporting cross-dimension filtering with exclude-self logic, and SHALL prefer materialized Pareto snapshots over full detail regrouping.
Scenario: Batch Pareto response structure
- WHEN `GET /api/reject-history/batch-pareto` is called with a valid `query_id`
- THEN response SHALL be `{ success: true, data: { dimensions: { reason: {...}, package: {...}, type: {...}, workflow: {...}, workcenter: {...}, equipment: {...} } } }`
- THEN each dimension object SHALL include an `items` array with schema (`reason`, `metric_value`, `pct`, `cumPct`, `MOVEIN_QTY`, `REJECT_TOTAL_QTY`, `DEFECT_QTY`, `count`)
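A minimal sketch of the response shape described above, using the field names from the scenario schema; the numeric values and the nesting of each dimension object beyond `items` are illustrative assumptions, not the actual payload:

```python
# Illustrative batch Pareto response shape, assembled from the schema in the
# scenario above. Numeric values are made up for demonstration.
example_item = {
    "reason": "SCRATCH",       # item label for the dimension
    "metric_value": 120,       # metric used for ranking
    "pct": 40.0,               # share of the dimension total
    "cumPct": 40.0,            # running cumulative share
    "MOVEIN_QTY": 3000,
    "REJECT_TOTAL_QTY": 120,
    "DEFECT_QTY": 150,
    "count": 12,               # number of contributing lots
}

example_response = {
    "success": True,
    "data": {
        "dimensions": {
            dim: {"items": [example_item]}
            for dim in ("reason", "package", "type",
                        "workflow", "workcenter", "equipment")
        }
    },
}
```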
Scenario: Cross-filter exclude-self logic
- WHEN `sel_reason=A&sel_type=X` is provided
- THEN reason Pareto SHALL be computed with the type=X filter applied (but NOT the reason=A filter)
- THEN type Pareto SHALL be computed with reason=A filter applied (but NOT type=X filter)
- THEN package/workflow/workcenter/equipment Paretos SHALL be computed with both reason=A AND type=X filters applied
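The exclude-self rule above can be sketched as a small helper; the function and variable names here are illustrative, not the actual implementation:

```python
# Sketch of exclude-self cross-filtering: for each dimension, apply every
# sel_* selection EXCEPT the one keyed on that dimension itself.
DIMENSIONS = ("reason", "package", "type", "workflow", "workcenter", "equipment")

def exclude_self_filters(selections):
    """Return, per dimension, the selections that apply to its Pareto."""
    return {
        dim: {k: v for k, v in selections.items() if k != dim}
        for dim in DIMENSIONS
    }

def apply_filters(rows, filters):
    """Keep rows (dicts) matching every (column == value) filter."""
    return [r for r in rows if all(r.get(c) == v for c, v in filters.items())]
```

With `sel_reason=A&sel_type=X`, the reason Pareto sees only `{type: X}`, the type Pareto sees only `{reason: A}`, and the other four dimensions see both filters, matching the THEN clauses above.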
Scenario: Empty selections return unfiltered Paretos
- WHEN batch-pareto is called with no `sel_*` parameters
- THEN all 6 dimensions SHALL return their full Pareto distribution (subject to `pareto_scope`)
Scenario: Cache-only computation
- WHEN `query_id` does not exist in cache
- THEN the endpoint SHALL return HTTP 400 with an error message indicating a cache miss
- THEN the endpoint SHALL NOT fall back to Oracle query
Scenario: Materialized snapshot preferred
- WHEN a valid and fresh materialized Pareto snapshot exists for the request context
- THEN the endpoint SHALL return results from that snapshot
- THEN the endpoint SHALL avoid full lot-level DataFrame regrouping for the same request
Scenario: Materialized miss fallback behavior
- WHEN materialized snapshot is unavailable, stale, or build fails
- THEN the endpoint SHALL fall back to legacy cache DataFrame computation
- THEN the response schema and filter semantics SHALL remain unchanged
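The snapshot-preferred / legacy-fallback behavior in the two scenarios above can be sketched as a read-through helper. All names, the TTL value, and the source labels are assumptions for illustration; the real service also carries the single-flight guard and size guardrails mentioned in the summary:

```python
# Minimal read-through sketch: prefer a fresh materialized snapshot, rebuild
# it on miss/stale, and fall back to the legacy compute path if the build
# fails. Illustrative only.
import time

SNAPSHOT_TTL_SECONDS = 300  # assumed guardrail, not specified by the spec

_snapshots = {}  # query_id -> (built_at_monotonic, result)

def get_pareto(query_id, build_snapshot, legacy_compute):
    """Return (result, source). `build_snapshot` regroups the cached
    lot-level rows into a snapshot; `legacy_compute` is the old path."""
    now = time.monotonic()
    entry = _snapshots.get(query_id)
    if entry is not None and now - entry[0] < SNAPSHOT_TTL_SECONDS:
        return entry[1], "materialized"
    try:
        result = build_snapshot()
        _snapshots[query_id] = (now, result)
        return result, "materialized_built"
    except Exception:
        # Safe legacy fallback: same schema and filter semantics, just
        # computed without materializing a snapshot.
        return legacy_compute(), "fallback_build_failed"
```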
Scenario: Supplementary and policy filters apply
- WHEN batch-pareto is called with supplementary filters (packages, workcenter_groups, reason) and policy toggles
- THEN all 6 dimension Paretos SHALL be computed after applying policy and supplementary filters first (before cross-filter)
Scenario: Display scope (TOP20) support
- WHEN `pareto_display_scope=top20` is provided
- THEN applicable dimensions (type, workflow, equipment) SHALL truncate results to the top 20 items after sorting
- WHEN `pareto_display_scope` is omitted or set to `all`
- THEN all items SHALL be returned (subject to the `pareto_scope` filter)
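A sketch of the sort → pct/cumPct → truncate ordering implied by this scenario; the function name is hypothetical, and the per-item fields follow the schema from the batch response scenario:

```python
# Build one dimension's Pareto items: sort descending by metric, compute
# share and cumulative share, then truncate when display scope is top20.
def pareto_items(totals, display_scope="all"):
    """totals: {item_label: metric_value}. Returns the spec's items list."""
    grand_total = sum(totals.values()) or 1.0
    items, cum = [], 0.0
    for name, value in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        pct = 100.0 * value / grand_total
        cum += pct
        items.append({"reason": name, "metric_value": value,
                      "pct": round(pct, 2), "cumPct": round(cum, 2)})
    # Truncation happens only after sorting, so top20 keeps the largest items.
    return items[:20] if display_scope == "top20" else items
```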
ADDED Requirements
Requirement: Reject History API SHALL expose materialized Pareto freshness metadata
The API SHALL surface stable metadata so operators and clients can identify whether Pareto responses came from materialized snapshots or fallback paths.
Scenario: Materialized hit metadata
- WHEN batch pareto response is served from materialized snapshot
- THEN response metadata SHALL indicate materialized source and snapshot freshness/version identifiers
Scenario: Fallback metadata
- WHEN response uses legacy fallback due to snapshot miss/stale/build failure
- THEN response metadata SHALL include a stable fallback reason code
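The spec requires only that source, freshness/version, and a stable fallback reason be exposed; the concrete field names and reason codes below are illustrative assumptions:

```python
# Illustrative metadata payloads for the two scenarios above. Field names
# and reason codes are assumptions, not the actual wire format.
materialized_meta = {
    "pareto_source": "materialized",
    "snapshot_built_at": "2024-01-01T00:00:00Z",  # freshness identifier
    "snapshot_version": "v3",                      # version identifier
}

fallback_meta = {
    "pareto_source": "legacy_fallback",
    # stable reason code, e.g. snapshot_miss | snapshot_stale | build_failed
    "fallback_reason": "snapshot_stale",
}
```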