feat(qc-gate): add Package column to LOT detail table + archive 3 completed changes

Add PACKAGE_LEF as a dedicated `package` field in the QC-GATE API payload and display it as a new column after LOT ID in LotTable.vue. Archive qc-gate-lot-package-column, historical-query-slow-connection, and msd-multifactor-backward-tracing changes with their delta specs synced to main specs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-03-02
@@ -0,0 +1,28 @@
## Context

The QC-GATE LOT detail table (`LotTable.vue`) displays 9 columns sourced from the `/api/qc-gate/summary` API. The backend (`qc_gate_service.py`) builds each lot payload from the Redis-cached WIP snapshot of `DW_MES_LOT_V`. The `PACKAGE_LEF` column already exists in the WIP data but is currently only used as a fallback for the `product` field — it is never exposed independently.

## Goals / Non-Goals

**Goals:**
- Expose `PACKAGE_LEF` as a dedicated `package` field in each lot's API payload.
- Display a "Package" column in the LOT detail table, positioned immediately after "LOT ID".
- Keep the column sortable, consistent with existing column behavior.

**Non-Goals:**
- Changing the Product column's fallback logic (it still falls through to `PACKAGE_LEF` when `PRODUCT` is null).
- Adding chart-level grouping or filtering by package.
- Modifying the DB view or Redis cache schema — `PACKAGE_LEF` is already available.

## Decisions

1. **Field name: `package`** — Maps directly to `PACKAGE_LEF` from `DW_MES_LOT_V`. Simple, descriptive, consistent with existing snake_case payload keys (`lot_id`, `wait_hours`).
2. **Column position: after LOT ID (index 1)** — The user explicitly requested "放在LOT ID之後" ("place it after LOT ID"). Insert into the `HEADERS` array at index 1, shifting Product and subsequent columns right.
3. **No backend query changes** — `PACKAGE_LEF` is already present in the WIP cache DataFrame. We just read it in `_build_lot_payload()`, same as other fields.

## Risks / Trade-offs

- **Wide table on small screens** → The table already has horizontal scroll (`lot-table-scroll`); adding one more column is acceptable.
- **Null values** → Many lots may have `PACKAGE_LEF = NULL`. The existing `formatValue()` helper already renders `'-'` for nulls, so no special handling is needed.
@@ -0,0 +1,24 @@
## Why

The QC-GATE LOT detail table currently lacks a dedicated Package column. The `PACKAGE_LEF` field from `DW_MES_LOT_V` is only used as a fallback for the Product column, making it invisible when Product has a value. Users need to see the package (lead-frame) information alongside the LOT ID to quickly identify packaging context during QC-GATE monitoring.

## What Changes

- Add a new **Package** column to the QC-GATE LOT detail table, positioned immediately after the LOT ID column.
- Expose the `PACKAGE_LEF` field from the WIP cache as a dedicated `package` field in the API response payload.
- No existing columns are removed or reordered beyond the insertion point.

## Capabilities

### New Capabilities
_(none — this is a column addition to an existing capability)_

### Modified Capabilities
- `qc-gate-status`: Add `package` field to LOT payload and display it as a new column after LOT ID in the detail table.

## Impact

- **Backend**: `qc_gate_service.py` — `_build_lot_payload()` adds a `package` key.
- **Frontend**: `LotTable.vue` — `HEADERS` array gains a new entry; template adds a `<td>` cell.
- **API**: `/api/qc-gate/summary` response shape gains `package` in each lot object (additive, non-breaking).
- **No DB changes**: `PACKAGE_LEF` already exists in `DW_MES_LOT_V` and is present in the Redis WIP cache.
@@ -0,0 +1,50 @@
## MODIFIED Requirements

### Requirement: System SHALL provide QC-GATE LOT status API
The system SHALL provide an API endpoint that returns real-time LOT status for all QC-GATE stations, with wait time classification.

#### Scenario: Retrieve QC-GATE summary
- **WHEN** user sends GET `/api/qc-gate/summary`
- **THEN** the system SHALL return all LOTs whose `SPECNAME` contains both "QC" and "GATE" (case-insensitive)
- **THEN** each LOT SHALL include `wait_hours` calculated as `(SYS_DATE - MOVEINTIMESTAMP)` in hours
- **THEN** each LOT SHALL be classified into a time bucket: `lt_6h`, `6h_12h`, `12h_24h`, or `gt_24h`
- **THEN** each LOT SHALL include a `package` field sourced from the `PACKAGE_LEF` column of `DW_MES_LOT_V`
- **THEN** the response SHALL include per-station bucket counts and the full lot list
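The wait-time rules above can be sketched as a small classifier. The bucket names and boundaries come straight from the scenario; treating each bound as a half-open interval (lower bound inclusive) is an assumption, since the spec does not say which side is inclusive:

```python
from datetime import datetime

# Bucket upper bounds in hours, in ascending order (names from the spec).
BUCKET_BOUNDS = (("lt_6h", 6), ("6h_12h", 12), ("12h_24h", 24))

def classify_lot(sys_date: datetime, movein: datetime):
    """Return (wait_hours, bucket) per the scenario's classification rules."""
    wait_hours = (sys_date - movein).total_seconds() / 3600.0
    for bucket, upper in BUCKET_BOUNDS:
        if wait_hours < upper:
            return wait_hours, bucket
    return wait_hours, "gt_24h"
```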

#### Scenario: QC-GATE data sourced from WIP cache
- **WHEN** the API is called
- **THEN** the system SHALL read from the existing WIP Redis cache (not direct Oracle query)
- **THEN** the response SHALL include `cache_time` indicating the WIP snapshot timestamp

#### Scenario: No QC-GATE lots in cache
- **WHEN** no LOTs match the QC-GATE SPECNAME pattern
- **THEN** the system SHALL return an empty `stations` array with `cache_time`

### Requirement: QC-GATE report page SHALL display filterable LOT table
The page SHALL display a table listing individual LOTs, with click-to-filter interaction from the bar chart.

#### Scenario: Default table display
- **WHEN** the page loads
- **THEN** the table SHALL show all QC-GATE LOTs sorted by wait time descending
- **THEN** the table SHALL display a "Package" column immediately after the "LOT ID" column

#### Scenario: Package column displays PACKAGE_LEF value
- **WHEN** a LOT has a non-null `PACKAGE_LEF` value
- **THEN** the Package column SHALL display the `package` field value

#### Scenario: Package column with null value
- **WHEN** a LOT has a null or empty `PACKAGE_LEF` value
- **THEN** the Package column SHALL display a dash (`-`)

#### Scenario: Package column is sortable
- **WHEN** user clicks the "Package" column header
- **THEN** the table SHALL sort rows by package value alphabetically (ascending on first click, toggling on subsequent clicks)

#### Scenario: Click bar chart to filter
- **WHEN** user clicks a specific segment of a bar (e.g., QC-GATE-DB's 6-12hr segment)
- **THEN** the table SHALL filter to show only LOTs matching that station AND time bucket
- **THEN** a filter indicator SHALL be visible showing the active filter

#### Scenario: Clear filter
- **WHEN** user clicks the active filter indicator or clicks the same bar segment again
- **THEN** the table SHALL return to showing all QC-GATE LOTs
@@ -0,0 +1,13 @@
## 1. Backend — Expose package field in API payload

- [x] 1.1 In `src/mes_dashboard/services/qc_gate_service.py`, add `'package': _safe_value(row.get('PACKAGE_LEF'))` to the dict returned by `_build_lot_payload()`

## 2. Frontend — Add Package column to LOT detail table

- [x] 2.1 In `frontend/src/qc-gate/components/LotTable.vue`, insert `{ key: 'package', label: 'Package' }` into the `HEADERS` array at index 1 (after LOT ID)
- [x] 2.2 In the `<tbody>` template, add `<td>{{ formatValue(lot.package) }}</td>` after the `lot_id` cell

## 3. Verification

- [x] 3.1 Run existing backend tests to confirm no regressions
- [x] 3.2 Run existing frontend tests/lint to confirm no regressions
@@ -1,4 +1,7 @@
-## MODIFIED Requirements
+## Purpose
+Define stable requirements for hold-history-api.
+
+## Requirements
 
 ### Requirement: Hold History API SHALL provide daily trend data with Redis caching
 The Hold History API SHALL return trend, reason-pareto, duration, and list data from a single cached dataset via a two-phase query pattern (POST /query + GET /view). The old independent GET endpoints for trend, reason-pareto, duration, and list SHALL be replaced.

@@ -60,3 +63,11 @@ The department endpoint SHALL remain as a separate Oracle query due to its uniqu
 - **WHEN** `GET /api/hold-history/department` is called
 - **THEN** it SHALL continue to execute its own Oracle query
 - **THEN** it SHALL NOT use the dataset cache
+
+### Requirement: Database query execution path
+The hold-history service (`hold_history_service.py`) SHALL use `read_sql_df_slow` (dedicated connection) instead of `read_sql_df` (pooled connection) for all Oracle queries.
+
+#### Scenario: Hold history queries use dedicated connection
+- **WHEN** any hold-history query is executed (trend, pareto, duration, list)
+- **THEN** it uses `read_sql_df_slow` which creates a dedicated Oracle connection outside the pool
+- **AND** the connection has a 300-second call_timeout (configurable)
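The dedicated-connection pattern these requirements describe can be sketched as a context manager. The semaphore size and the injected `connect` factory are assumptions (the real `read_sql_df_slow` lives in the service layer); `call_timeout` follows python-oracledb's millisecond convention:

```python
import threading
from contextlib import contextmanager

SLOW_QUERY_SEMAPHORE = threading.Semaphore(4)  # global cap; actual size is an assumption
CALL_TIMEOUT_MS = 300_000                      # 300 s, configurable per the spec

@contextmanager
def dedicated_connection(connect):
    """Open a one-off connection outside the pool, gated by the slow-query semaphore."""
    with SLOW_QUERY_SEMAPHORE:
        conn = connect()  # e.g. oracledb.connect(...) in the real service
        conn.call_timeout = CALL_TIMEOUT_MS
        try:
            yield conn
        finally:
            conn.close()
```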
@@ -1,4 +1,7 @@
-## MODIFIED Requirements
+## Purpose
+Define stable requirements for hold-history-page.
+
+## Requirements
 
 ### Requirement: Hold History page SHALL display a filter bar with date range and hold type
 The page SHALL provide a filter bar for selecting date range and hold type classification. On query, the page SHALL use a two-phase flow: POST /query returns queryId, subsequent filter changes use GET /view.

@@ -32,3 +35,10 @@ The page SHALL provide a filter bar for selecting date range and hold type class
 #### Scenario: Department still uses separate API
 - **WHEN** department data needs to load or reload
 - **THEN** the page SHALL call `GET /api/hold-history/department` separately
+
+### Requirement: Frontend API timeout
+The hold-history page SHALL use a 360-second API timeout (up from 60 seconds) for all Oracle-backed API calls.
+
+#### Scenario: Large date range query completes
+- **WHEN** a user queries hold history for a long date range
+- **THEN** the frontend does not abort the request for at least 360 seconds
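The two-phase pattern referenced throughout these specs (POST /query runs Oracle once and caches a dataset under a queryId; GET /view derives filtered views in memory) can be sketched as a tiny cache class. Everything below is illustrative; the real dataset cache modules are named in the specs but their interfaces are not shown here:

```python
import uuid

class DatasetCache:
    """Sketch of the two-phase query pattern: one slow query, many cheap views."""

    def __init__(self):
        self._datasets = {}

    def query(self, run_oracle_query, params):
        # Phase 1 (POST /query): execute the expensive query once, cache the result.
        query_id = str(uuid.uuid4())
        self._datasets[query_id] = run_oracle_query(params)
        return query_id

    def view(self, query_id, view_fn):
        # Phase 2 (GET /view): derive a filtered view in memory, no Oracle round-trip.
        return view_fn(self._datasets[query_id])
```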
openspec/specs/msd-analysis-transparency/spec.md (Normal file, 51 lines)
@@ -0,0 +1,51 @@
## Purpose
Define stable requirements for msd-analysis-transparency.

## Requirements

### Requirement: Analysis page SHALL display a collapsible analysis summary panel
The page SHALL show a summary panel above KPI cards explaining the query context, data scope, and attribution methodology.

#### Scenario: Summary panel rendering
- **WHEN** backward analysis data is loaded
- **THEN** a collapsible panel SHALL appear above the KPI cards
- **THEN** the panel SHALL be expanded by default on first render
- **THEN** the panel SHALL include a toggle control to collapse/expand

#### Scenario: Query context section
- **WHEN** the summary panel is rendered
- **THEN** it SHALL display the committed query parameters: detection station name, date range (or container mode info), and selected loss reasons (or「全部」("All") if none selected)

#### Scenario: Data scope section
- **WHEN** the summary panel is rendered
- **THEN** it SHALL display:
  - 偵測站 LOT 總數 (total detection lots count)
  - 總投入 (total input qty in pcs)
  - 報廢 LOT 數 (lots with defects matching selected loss reasons)
  - 報廢總數 (total reject qty in pcs)
  - 血緣追溯涵蓋上游 LOT 數 (total unique ancestor count)

#### Scenario: Ancestor count from lineage response
- **WHEN** the lineage stage returns its response
- **THEN** the response SHALL include `total_ancestor_count` (number of unique ancestor CIDs across all seeds, excluding seeds themselves)
- **THEN** the summary panel SHALL use this value for「血緣追溯涵蓋上游 LOT」
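The `total_ancestor_count` definition above (unique ancestor CIDs across all seeds, excluding the seeds themselves) can be sketched directly. The per-seed ancestor-set input shape is an assumption:

```python
def total_ancestor_count(seed_to_ancestors, seeds):
    """Count unique ancestor CIDs across all seeds, excluding the seeds themselves.

    seed_to_ancestors: {seed_cid: iterable of ancestor CIDs} (assumed shape).
    """
    ancestors = set()
    for seed in seeds:
        ancestors.update(seed_to_ancestors.get(seed, ()))
    return len(ancestors - set(seeds))
```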

#### Scenario: Attribution methodology section
- **WHEN** the summary panel is rendered
- **THEN** it SHALL display a static text block explaining the attribution logic:
  - All LOTs passing through the detection station (including those with no defects) are included in the analysis
  - Each LOT's upstream lineage (split/merge chain) is traced to identify associated upstream factors
  - Attribution rate = sum of associated LOTs' reject qty / sum of associated LOTs' input qty × 100%
  - The same defect can be attributed to multiple upstream factors (non-exclusive)
  - Pareto bar height = attributed defect count (with overlap); orange line = attributed defect rate

#### Scenario: Summary panel in container mode
- **WHEN** query mode is container mode
- **THEN** the query context section SHALL show the input type, resolved count, and not-found count instead of date range
- **THEN** the data scope section SHALL still show LOT count and input/reject totals

#### Scenario: Summary panel collapsed state persistence
- **WHEN** user collapses the summary panel
- **THEN** the collapsed state SHALL persist within the current session (sessionStorage)
- **WHEN** user triggers a new query
- **THEN** the panel SHALL remain in its current collapsed/expanded state
openspec/specs/msd-multifactor-attribution/spec.md (Normal file, 96 lines)
@@ -0,0 +1,96 @@
## Purpose
Define stable requirements for msd-multifactor-attribution.

## Requirements

### Requirement: Backward tracing SHALL attribute defects to upstream materials
The system SHALL compute material-level attribution using the same pattern as machine attribution: for each material `(part_name, lot_name)` consumed by detection lots or their ancestors, calculate the defect rate among associated detection lots.

#### Scenario: Materials attribution data flow
- **WHEN** backward tracing events stage completes with `upstream_history` and `materials` domains
- **THEN** the aggregation engine SHALL build a `material_key → detection_lots` mapping where `material_key = (MATERIALPARTNAME, MATERIALLOTNAME)`
- **THEN** for each material key, `attributed_defect_rate = Σ(REJECTQTY of associated detection lots) / Σ(TRACKINQTY of associated detection lots) × 100`
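The mapping and rate formula above can be sketched as a small aggregation. The field names come from the spec; the input shapes in the docstring are assumptions, and the part-only key for lot-less materials mirrors the "Material with no lot name" scenario:

```python
from collections import defaultdict

def material_attribution(detection_lots, material_rows):
    """Build material_key -> attributed_defect_rate per the data-flow scenario.

    detection_lots: {lot: {"REJECTQTY": int, "TRACKINQTY": int}}  (assumed shape)
    material_rows:  [{"LOT": lot, "MATERIALPARTNAME": str, "MATERIALLOTNAME": str|None}]
    """
    key_to_lots = defaultdict(set)
    for row in material_rows:
        mlot = row.get("MATERIALLOTNAME")
        # Materials with no lot name are keyed by part name only.
        key = (row["MATERIALPARTNAME"], mlot) if mlot else (row["MATERIALPARTNAME"],)
        key_to_lots[key].add(row["LOT"])
    rates = {}
    for key, lots in key_to_lots.items():
        reject = sum(detection_lots[lot]["REJECTQTY"] for lot in lots)
        trackin = sum(detection_lots[lot]["TRACKINQTY"] for lot in lots)
        rates[key] = round(reject / trackin * 100, 2) if trackin else 0.0
    return rates
```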

#### Scenario: Materials domain requested in backward trace
- **WHEN** the frontend executes backward tracing with `mid_section_defect` profile
- **THEN** the events stage SHALL request domains `['upstream_history', 'materials']`
- **THEN** the `materials` domain SHALL use the existing EventFetcher materials domain (querying `LOTMATERIALSHISTORY`)

#### Scenario: Materials Pareto chart rendering
- **WHEN** materials attribution data is available
- **THEN** the frontend SHALL render a Pareto chart titled「依原物料歸因」(attribution by material)
- **THEN** each bar SHALL represent a `material_part_name (material_lot_name)` combination
- **THEN** the chart SHALL show the Top 10 items sorted by defect_qty, with remaining items grouped as「其他」(others)
- **THEN** the tooltip SHALL display: material name, material lot, defect count, input count, defect rate, cumulative %, and associated LOT count

#### Scenario: Material with no lot name
- **WHEN** a material record has `MATERIALLOTNAME` as NULL or empty
- **THEN** the material key SHALL use `material_part_name` only (without lot suffix)
- **THEN** the display label SHALL show the part name without a parenthetical lot

### Requirement: Backward tracing SHALL attribute defects to wafer root ancestors
The system SHALL compute root-ancestor-level attribution by identifying the split chain root for each detection lot and calculating defect rates per root.

#### Scenario: Root ancestor identification
- **WHEN** the lineage stage returns `ancestors` data (a child_to_parent map)
- **THEN** the backend SHALL identify root ancestors by traversing the parent chain for each seed until reaching a container with no further parent
- **THEN** roots SHALL be returned as `{seed_container_id: root_container_name}` in the lineage response

#### Scenario: Root attribution calculation
- **WHEN** the root mapping is available
- **THEN** the aggregation engine SHALL build a `root_container_name → detection_lots` mapping
- **THEN** for each root, `attributed_defect_rate = Σ(REJECTQTY) / Σ(TRACKINQTY) × 100`
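The root-identification rule above can be sketched as a parent-chain walk over the child_to_parent map. The visited-set guard against malformed (cyclic) lineage data is an added safety assumption, not something the spec requires:

```python
def find_root_ancestors(child_to_parent, seeds):
    """Walk each seed's parent chain to its root container.

    A seed with no ancestors maps to itself (per the no-ancestors scenario).
    """
    roots = {}
    for seed in seeds:
        node, seen = seed, {seed}
        while node in child_to_parent and child_to_parent[node] not in seen:
            node = child_to_parent[node]
            seen.add(node)
        roots[seed] = node
    return roots
```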

#### Scenario: Wafer root Pareto chart rendering
- **WHEN** root attribution data is available
- **THEN** the frontend SHALL render a Pareto chart titled「依源頭批次歸因」(attribution by source lot)
- **THEN** each bar SHALL represent a root ancestor `CONTAINERNAME`
- **THEN** the chart SHALL show the Top 10 items with a cumulative percentage line

#### Scenario: Detection lot with no ancestors
- **WHEN** a detection lot has no split chain ancestors (it is its own root)
- **THEN** the root mapping SHALL map the lot to its own `CONTAINERNAME`

### Requirement: Backward Pareto layout SHALL show 5 charts in machine/material/wafer/reason/detection arrangement
The backward tracing chart section SHALL display exactly 5 Pareto charts, replacing the previous 6-chart layout.

#### Scenario: Chart grid layout
- **WHEN** backward analysis data is rendered
- **THEN** charts SHALL be arranged as:
  - Row 1: 依上游機台歸因 (by upstream machine) | 依原物料歸因 (by material)
  - Row 2: 依源頭批次歸因 (by source lot) | 依不良原因 (by loss reason)
  - Row 3: 依偵測機台 (by detection machine; full width or single)
- **THEN** the previous「依 WORKFLOW」,「依 PACKAGE」, and「依 TYPE」charts SHALL NOT be rendered

### Requirement: Pareto charts SHALL support sort toggle between defect count and defect rate
Each Pareto chart SHALL allow the user to switch between sorting by defect quantity and by defect rate.

#### Scenario: Default sort order
- **WHEN** a Pareto chart is first rendered
- **THEN** bars SHALL be sorted by `defect_qty` descending (current behavior)

#### Scenario: Sort by rate toggle
- **WHEN** user clicks the sort toggle to「依不良率」(by defect rate)
- **THEN** bars SHALL re-sort by `defect_rate` descending
- **THEN** the cumulative percentage line SHALL recalculate based on the new sort order
- **THEN** the toggle SHALL visually indicate the active sort mode

#### Scenario: Sort toggle persistence within session
- **WHEN** user changes sort mode on one chart
- **THEN** the change SHALL only affect that specific chart (not all charts)
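The toggle behavior above amounts to re-sorting the items and recomputing the cumulative line. A sketch, assuming the cumulative line accumulates defect_qty share in the displayed order (the spec says it recalculates on re-sort but not what it accumulates):

```python
def pareto_series(items, sort_by="defect_qty"):
    """Sort Pareto items by the active mode and recompute the cumulative-% line.

    items: [{"name": str, "defect_qty": int, "defect_rate": float}]  (assumed shape)
    """
    ordered = sorted(items, key=lambda it: it[sort_by], reverse=True)
    total = sum(it["defect_qty"] for it in ordered) or 1
    running, cumulative = 0, []
    for it in ordered:
        running += it["defect_qty"]
        cumulative.append(round(running / total * 100, 1))
    return ordered, cumulative
```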

### Requirement: Pareto charts SHALL display an 80% cumulative reference line
Each Pareto chart SHALL include a horizontal dashed line at the 80% cumulative mark.

#### Scenario: 80% markLine rendering
- **WHEN** Pareto chart data is rendered with cumulative percentages
- **THEN** the chart SHALL display a horizontal dashed line at y=80 on the percentage axis
- **THEN** the line SHALL use a muted color (e.g., `#94a3b8`) with dotted style
- **THEN** the line label SHALL display「80%」

### Requirement: Pareto chart tooltip SHALL include LOT count
Each Pareto chart tooltip SHALL show the number of associated detection LOTs.

#### Scenario: Tooltip with LOT count
- **WHEN** user hovers over a Pareto bar
- **THEN** the tooltip SHALL display: factor name, 關聯 LOT count (with percentage of total), defect count, input count, defect rate, cumulative percentage
openspec/specs/msd-suspect-context/spec.md (Normal file, 80 lines)
@@ -0,0 +1,80 @@
## Purpose
Define stable requirements for msd-suspect-context.

## Requirements

### Requirement: Detail table SHALL display suspect factor hit counts instead of raw upstream machine list
The backward detail table SHALL replace the flat `UPSTREAM_MACHINES` string column with a structured suspect factor hit display that links to the current Pareto Top N.

#### Scenario: Suspect hit column rendering
- **WHEN** the backward detail table is rendered
- **THEN** the「上游機台」(upstream machines) column SHALL be replaced by a「嫌疑命中」(suspect hits) column
- **THEN** each cell SHALL show the names of upstream machines that appear in the current Pareto Top N suspect list, with a hit ratio (e.g., `WIRE-03, DIE-01 (2/5)`)

#### Scenario: Suspect list derived from Pareto Top N
- **WHEN** the machine Pareto chart displays the Top N machines (after any inline station/spec filters)
- **THEN** the suspect list SHALL be the set of machine names from those Top N entries
- **THEN** changing the Pareto inline filters SHALL update the suspect list and re-render the hit column

#### Scenario: Full match indicator
- **WHEN** a LOT's upstream machines include all machines in the suspect list
- **THEN** the cell SHALL display a visual indicator (e.g., star or highlight) marking the full match

#### Scenario: No hits
- **WHEN** a LOT's upstream machines include none of the suspect machines
- **THEN** the cell SHALL display「-」
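The hit-cell rules above can be sketched as one formatting function. The hit-ratio format and the「-」for no hits come from the scenarios; the star prefix for a full match is an assumed rendering, since the spec only asks for "a visual indicator (e.g., star or highlight)":

```python
def suspect_hit_label(upstream_machines, suspects):
    """Render the suspect-hit cell for one LOT.

    upstream_machines: unique upstream machine names for the LOT.
    suspects: machine names from the current Pareto Top N.
    """
    suspects = set(suspects)
    hits = [m for m in upstream_machines if m in suspects]
    if not hits:
        return "-"
    label = f"{', '.join(hits)} ({len(hits)}/{len(suspects)})"
    if suspects <= set(upstream_machines):
        label = "★ " + label  # full-match indicator (assumed rendering)
    return label
```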

#### Scenario: Upstream machine count column
- **WHEN** the backward detail table is rendered
- **THEN** the「上游LOT數」(upstream LOT count) column SHALL remain as-is (showing ancestor count)
- **THEN** a new「上游台數」(upstream machine count) column SHALL show the total number of unique upstream machines for that LOT

### Requirement: Backend detail table SHALL return structured upstream data
The `_build_detail_table` function SHALL return upstream machines as a structured list instead of a flat comma-separated string.

#### Scenario: Structured upstream machines response
- **WHEN** the backward detail API returns LOT records
- **THEN** each record's `UPSTREAM_MACHINES` field SHALL be a list of `{"station": "<workcenter_group>", "machine": "<equipment_name>"}` objects
- **THEN** the flat comma-separated string SHALL no longer be returned in this field

#### Scenario: CSV export backward compatibility
- **WHEN** CSV export is triggered for backward detail
- **THEN** the `UPSTREAM_MACHINES` column in CSV SHALL flatten the structured list back to comma-separated `station/machine` format
- **THEN** the CSV format SHALL remain unchanged from current behavior
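The flatten rule can be sketched directly from the scenario; the `station/machine` element format and comma separator are as specified there:

```python
def flatten_upstream_machines(machines):
    """Flatten the structured list back to the legacy CSV cell format.

    [{"station": "WB", "machine": "WIRE-03"}, ...] -> "WB/WIRE-03,DB/DIE-01"
    """
    return ",".join(f"{m['station']}/{m['machine']}" for m in machines)
```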

#### Scenario: Structured upstream materials response
- **WHEN** materials attribution is available
- **THEN** each detail record SHALL include an `UPSTREAM_MATERIALS` field: a list of `{"part": "<material_part_name>", "lot": "<material_lot_name>"}` objects

#### Scenario: Structured wafer root response
- **WHEN** root ancestor attribution is available
- **THEN** each detail record SHALL include a `WAFER_ROOT` field: a string with the root ancestor `CONTAINERNAME`

### Requirement: Suspect machine context panel SHALL show machine details and recent maintenance
Clicking a machine bar in the Pareto chart SHALL open a context popover showing machine attribution details and recent maintenance history.

#### Scenario: Context panel trigger
- **WHEN** user clicks a bar in the「依上游機台歸因」(by upstream machine) Pareto chart
- **THEN** a popover panel SHALL appear near the clicked bar
- **WHEN** user clicks outside the popover or clicks the same bar again
- **THEN** the popover SHALL close

#### Scenario: Context panel content - attribution summary
- **WHEN** the context panel is displayed
- **THEN** it SHALL show: equipment name, workcenter group, resource family (RESOURCEFAMILYNAME), attributed defect rate, attributed defect count, attributed input count, and associated LOT count

#### Scenario: Context panel content - recent maintenance
- **WHEN** the context panel is displayed
- **THEN** it SHALL fetch recent JOB records for the machine's equipment_id (last 30 days)
- **THEN** it SHALL display up to 5 most recent JOB records showing: JOBID, JOBSTATUS, JOBMODELNAME, CREATEDATE, COMPLETEDATE
- **WHEN** the machine has no recent JOB records
- **THEN** the maintenance section SHALL display「近 30 天無維修紀錄」("no maintenance records in the last 30 days")

#### Scenario: Context panel loading state
- **WHEN** maintenance data is being fetched
- **THEN** the maintenance section SHALL show a loading indicator
- **THEN** the attribution summary section SHALL render immediately (data already available from attribution)

#### Scenario: Context panel for non-machine charts
- **WHEN** user clicks bars in other Pareto charts (materials, wafer root, loss reason, detection machine)
- **THEN** no context panel SHALL appear (machine context only)
@@ -1,4 +1,7 @@
-## MODIFIED Requirements
+## Purpose
+Define stable requirements for progressive-trace-ux.
+
+## Requirements
 
 ### Requirement: query-tool lineage tab SHALL load on-demand
 The query-tool lineage tree SHALL auto-fire lineage API calls after lot resolution with concurrency-limited parallel requests and progressive rendering, while preserving on-demand expand/collapse for tree navigation.

@@ -14,3 +17,10 @@ The query-tool lineage tree SHALL auto-fire lineage API calls after lot resoluti
 - **THEN** each lot's lineage data SHALL be preserved independently (not re-fetched)
 - **WHEN** a new resolve query is executed
 - **THEN** all cached lineage data SHALL be cleared
+
+### Requirement: Trace stage timeout
+The `useTraceProgress` composable's `DEFAULT_STAGE_TIMEOUT_MS` SHALL be 360000 (360 seconds) to accommodate large-scale trace operations.
+
+#### Scenario: Large trace operation completes
+- **WHEN** a trace stage (seed-resolve, lineage, or events) takes up to 300 seconds
+- **THEN** the frontend does not abort the stage request
@@ -12,6 +12,7 @@ The system SHALL provide an API endpoint that returns real-time LOT status for a
 - **THEN** the system SHALL return all LOTs whose `SPECNAME` contains both "QC" and "GATE" (case-insensitive)
 - **THEN** each LOT SHALL include `wait_hours` calculated as `(SYS_DATE - MOVEINTIMESTAMP)` in hours
 - **THEN** each LOT SHALL be classified into a time bucket: `lt_6h`, `6h_12h`, `12h_24h`, or `gt_24h`
+- **THEN** each LOT SHALL include a `package` field sourced from the `PACKAGE_LEF` column of `DW_MES_LOT_V`
 - **THEN** the response SHALL include per-station bucket counts and the full lot list
 
 #### Scenario: QC-GATE data sourced from WIP cache

@@ -53,6 +54,19 @@ The page SHALL display a table listing individual LOTs, with click-to-filter int
 #### Scenario: Default table display
 - **WHEN** the page loads
 - **THEN** the table SHALL show all QC-GATE LOTs sorted by wait time descending
+- **THEN** the table SHALL display a "Package" column immediately after the "LOT ID" column
+
+#### Scenario: Package column displays PACKAGE_LEF value
+- **WHEN** a LOT has a non-null `PACKAGE_LEF` value
+- **THEN** the Package column SHALL display the `package` field value
+
+#### Scenario: Package column with null value
+- **WHEN** a LOT has a null or empty `PACKAGE_LEF` value
+- **THEN** the Package column SHALL display a dash (`-`)
+
+#### Scenario: Package column is sortable
+- **WHEN** user clicks the "Package" column header
+- **THEN** the table SHALL sort rows by package value alphabetically (ascending on first click, toggling on subsequent clicks)
 
 #### Scenario: Click bar chart to filter
 - **WHEN** user clicks a specific segment of a bar (e.g., QC-GATE-DB's 6-12hr segment)
@@ -1,4 +1,7 @@
-## ADDED Requirements
+## Purpose
+Define stable requirements for query-tool-equipment.
+
+## Requirements
 
 ### Requirement: Equipment tab SHALL provide equipment selection with date range filtering
 The equipment tab SHALL allow selecting multiple equipment and a date range for all sub-tab queries.

@@ -70,3 +73,10 @@ Every equipment sub-tab SHALL have its own export button calling the existing ex
 - **WHEN** the user clicks export on any equipment sub-tab
 - **THEN** the system SHALL call `POST /api/query-tool/export-csv` with the appropriate `export_type` (equipment_lots, equipment_jobs, equipment_rejects)
 - **THEN** the exported params SHALL include the current equipment_ids/equipment_names and date range
+
+### Requirement: Frontend API timeout
+The query-tool equipment query, lot detail, lot jobs table, lot resolve, lot lineage, and reverse lineage composables SHALL use a 360-second API timeout for all Oracle-backed API calls.
+
+#### Scenario: Equipment period query completes
+- **WHEN** a user queries equipment history for a long period
+- **THEN** the frontend does not abort the request for at least 360 seconds
@@ -1,4 +1,7 @@
-## ADDED Requirements
+## Purpose
+Define stable requirements for query-tool-lot-trace.
+
+## Requirements
 
 ### Requirement: Query-tool page SHALL use tab-based layout separating LOT tracing from equipment queries
 The query-tool page SHALL present two top-level tabs: "LOT 追蹤" (LOT tracing) and "設備查詢" (equipment query), each with independent state and UI.

@@ -150,3 +153,11 @@ The legacy `frontend/src/query-tool/main.js` (448L vanilla JS) and `frontend/src
 - **WHEN** the rewrite is complete
 - **THEN** `frontend/src/query-tool/main.js` SHALL contain only the Vite entry point (createApp + mount)
 - **THEN** `frontend/src/query-tool/style.css` SHALL be deleted (all styling via Tailwind)
+
+### Requirement: Slow query timeout configuration
+The query-tool service `read_sql_df_slow` call for full split/merge history SHALL use the config-driven default timeout instead of a hardcoded 120-second timeout.
+
+#### Scenario: Full history query uses config timeout
+- **WHEN** a `full_history=True` split/merge query is executed
+- **THEN** it uses `read_sql_df_slow` with the default timeout from `DB_SLOW_CALL_TIMEOUT_MS` (300s)
+- **AND** the hardcoded `timeout_seconds=120` parameter is removed
@@ -115,3 +115,12 @@ The API SHALL rate-limit high-cost endpoints to protect Oracle and application r
|
||||
- **WHEN** `/api/reject-history/list` or `/api/reject-history/export` receives excessive requests
|
||||
- **THEN** configured rate limiting SHALL reject requests beyond the threshold within the time window
|
||||
|
||||
### Requirement: Database query execution path

The reject-history service (`reject_history_service.py` and `reject_dataset_cache.py`) SHALL use `read_sql_df_slow` (dedicated connection) instead of `read_sql_df` (pooled connection) for all Oracle queries.

#### Scenario: Primary query uses dedicated connection

- **WHEN** the reject-history primary query is executed
- **THEN** it uses `read_sql_df_slow`, which creates a dedicated Oracle connection outside the pool
- **AND** the connection has a 300-second `call_timeout` (configurable)
- **AND** the connection is subject to the global slow query semaphore

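The dedicated-connection path can be sketched as below. This is a simplified illustration, not the project's actual implementation: the `connect` parameter is injected only so the sketch stays self-contained, whereas the real `read_sql_df_slow` presumably builds its own `oracledb` connection outside the pool. Note that python-oracledb's `call_timeout` is specified in milliseconds.

```python
def read_sql_df_slow(sql, params=None, timeout_seconds=300, connect=None):
    """Run a query on a dedicated (non-pooled) connection.

    Sketch only: `connect` is an injected factory standing in for a real
    `oracledb.connect(...)` call made outside the connection pool.
    """
    conn = connect()  # dedicated connection, not borrowed from the pool
    # python-oracledb expresses call_timeout in milliseconds.
    conn.call_timeout = timeout_seconds * 1000
    try:
        return conn.execute(sql, params)
    finally:
        conn.close()  # a dedicated connection is closed, not returned to a pool
```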
@@ -195,3 +195,10 @@ The page template SHALL delegate sections to focused sub-components, following t

- **THEN** `App.vue` SHALL hold all reactive state and API logic
- **THEN** sub-components SHALL receive data via props and communicate via events

### Requirement: Frontend API timeout

The reject-history page SHALL use a 360-second API timeout (up from 60 seconds) for all Oracle-backed API calls.

#### Scenario: Large date range query completes

- **WHEN** a user queries reject history for a long date range
- **THEN** the frontend does not abort the request for at least 360 seconds

@@ -1,4 +1,7 @@

## MODIFIED Requirements

## Purpose

Define stable requirements for resource-history-page.

## Requirements

### Requirement: Resource History page SHALL support date range and granularity selection

The page SHALL allow users to specify the time range and aggregation granularity. On query, the page SHALL use a two-phase flow: `POST /query` returns a `queryId`, and subsequent filter changes use `GET /view`.

@@ -44,3 +47,18 @@ The page SHALL show a three-level expandable table derived from the cached datas

- **WHEN** detail data is derived from the cached DataFrame
- **THEN** a tree table SHALL display with the same columns and hierarchy as before
- **THEN** data SHALL be derived in-memory from the cached DataFrame, not from a separate Oracle query

### Requirement: Database query execution path

The resource-history service (`resource_history_service.py`) SHALL use `read_sql_df_slow` (dedicated connection) instead of `read_sql_df` (pooled connection) for all Oracle queries.

#### Scenario: Summary parallel queries use dedicated connections

- **WHEN** the resource-history summary query executes 3 parallel queries via `ThreadPoolExecutor`
- **THEN** each query uses `read_sql_df_slow` and acquires a semaphore slot
- **AND** all 3 queries complete and release their slots

### Requirement: Frontend timeout

The resource-history page frontend SHALL use a 360-second API timeout for all Oracle-backed API calls.

#### Scenario: Large date range query completes

- **WHEN** a user queries resource history for a 2-year date range
- **THEN** the frontend does not abort the request for at least 360 seconds

56 openspec/specs/slow-query-concurrency-control/spec.md Normal file

@@ -0,0 +1,56 @@

## Purpose

Define stable requirements for slow-query-concurrency-control.

## Requirements

### Requirement: Configurable slow query timeout

The system SHALL read `DB_SLOW_CALL_TIMEOUT_MS` from environment/config to determine the default `call_timeout` for `read_sql_df_slow`. The default value SHALL be 300000 (300 seconds).

#### Scenario: Default timeout when no env var set

- **WHEN** `DB_SLOW_CALL_TIMEOUT_MS` is not set in the environment
- **THEN** `read_sql_df_slow` uses 300 seconds as `call_timeout`

#### Scenario: Custom timeout from env var

- **WHEN** `DB_SLOW_CALL_TIMEOUT_MS` is set to 180000
- **THEN** `read_sql_df_slow` uses 180 seconds as `call_timeout`

#### Scenario: Caller overrides timeout

- **WHEN** the caller passes `timeout_seconds=120` to `read_sql_df_slow`
- **THEN** the function uses 120 seconds regardless of the config value

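The three scenarios imply a resolution order of caller override, then env var, then the 300-second default. A minimal sketch, with the hypothetical helper name `resolve_slow_timeout_seconds`:

```python
import os

DEFAULT_SLOW_TIMEOUT_MS = 300_000  # documented default for DB_SLOW_CALL_TIMEOUT_MS

def resolve_slow_timeout_seconds(timeout_seconds=None):
    # 1) An explicit caller value always wins.
    if timeout_seconds is not None:
        return timeout_seconds
    # 2) Otherwise fall back to the env var; 3) otherwise the 300 s default.
    ms = int(os.environ.get("DB_SLOW_CALL_TIMEOUT_MS", DEFAULT_SLOW_TIMEOUT_MS))
    return ms // 1000
```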
### Requirement: Semaphore-based concurrency control

The system SHALL use a global `threading.Semaphore` to limit the number of concurrent `read_sql_df_slow` executions. The limit SHALL be configurable via `DB_SLOW_MAX_CONCURRENT` with a default of 3.

#### Scenario: Concurrent queries within limit

- **WHEN** 2 slow queries are running and a 3rd is submitted (limit=3)
- **THEN** the 3rd query proceeds immediately

#### Scenario: Concurrent queries exceed limit

- **WHEN** 3 slow queries are running and a 4th is submitted (limit=3)
- **THEN** the 4th query waits up to 60 seconds for a slot
- **AND** if no slot becomes available, it raises a `RuntimeError` with a message indicating that all slots are busy

#### Scenario: Semaphore release on query failure

- **WHEN** a slow query raises an exception during execution
- **THEN** the semaphore slot is released in the `finally` block

### Requirement: Slow query active count diagnostic

The system SHALL expose the current number of active slow queries via `get_slow_query_active_count()` and include it in `get_pool_status()` as `slow_query_active`.

#### Scenario: Active count in pool status

- **WHEN** 2 slow queries are running
- **THEN** `get_pool_status()` returns `slow_query_active: 2`

### Requirement: Gunicorn timeout accommodates slow queries

The Gunicorn worker timeout SHALL be at least 360 seconds to accommodate the maximum slow query duration (300s) plus overhead.

#### Scenario: Long query does not kill worker

- **WHEN** a slow query takes 280 seconds to complete
- **THEN** the Gunicorn worker does not time out and the response is delivered

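In a `gunicorn.conf.py` this is a one-line setting; `timeout` is Gunicorn's standard worker-timeout option, and the value comes from the requirement above:

```python
# gunicorn.conf.py (fragment)
# Worker timeout must exceed the 300 s slow-query ceiling plus overhead.
timeout = 360
```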
### Requirement: Config settings in all environments

All environment configs (Config, DevelopmentConfig, ProductionConfig, TestingConfig) SHALL define `DB_SLOW_CALL_TIMEOUT_MS` and `DB_SLOW_MAX_CONCURRENT`.

#### Scenario: Testing config uses short timeout

- **WHEN** running in the testing environment
- **THEN** `DB_SLOW_CALL_TIMEOUT_MS` defaults to 10000 and `DB_SLOW_MAX_CONCURRENT` defaults to 1

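A minimal sketch of the four config classes, assuming the Flask-style class hierarchy the names suggest; the testing overrides mirror the scenario values:

```python
import os

class Config:
    DB_SLOW_CALL_TIMEOUT_MS = int(os.environ.get("DB_SLOW_CALL_TIMEOUT_MS", 300_000))
    DB_SLOW_MAX_CONCURRENT = int(os.environ.get("DB_SLOW_MAX_CONCURRENT", 3))

class DevelopmentConfig(Config):
    pass

class ProductionConfig(Config):
    pass

class TestingConfig(Config):
    # Short timeout and a single slot keep the test suite fast and deterministic.
    DB_SLOW_CALL_TIMEOUT_MS = 10_000
    DB_SLOW_MAX_CONCURRENT = 1
```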