feat: dataset cache for hold/resource history + slow connection migration

Two changes combined:

1. historical-query-slow-connection: Migrate all historical query pages
   to read_sql_df_slow with semaphore concurrency control (max 3
   concurrent queries), raise the DB slow-query timeout to 300s and the
   gunicorn timeout to 360s, and unify frontend timeouts at 360s across
   all historical pages.

2. hold-resource-history-dataset-cache: Convert hold-history and
   resource-history from multi-query to single-query + dataset cache
   pattern (L1 ProcessLevelCache + L2 Redis parquet/base64, TTL=900s).
   Replace old GET endpoints with POST /query + GET /view two-phase
   API. Frontend auto-retries on 410 cache_expired.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: egg
Date: 2026-02-25 13:15:02 +08:00
Parent: cd061e0cfd
Commit: 71c8102de6
64 changed files with 3806 additions and 1442 deletions


@@ -0,0 +1,64 @@
## ADDED Requirements
### Requirement: Hold dataset cache SHALL execute a single Oracle query and cache the result
The hold_dataset_cache module SHALL query Oracle once for the full hold/release fact set and cache it for subsequent derivations.
#### Scenario: Primary query execution and caching
- **WHEN** `execute_primary_query()` is called with date range and hold_type parameters
- **THEN** a deterministic `query_id` SHALL be computed from the primary params (start_date, end_date) using SHA256
- **THEN** if a cached DataFrame exists for this query_id (L1 or L2), it SHALL be used without querying Oracle
- **THEN** if no cache exists, a single Oracle query SHALL fetch all hold/release records from `DW_MES_HOLDRELEASEHISTORY` for the date range (all hold_types)
- **THEN** the result DataFrame SHALL be stored in both L1 (ProcessLevelCache) and L2 (Redis as parquet/base64)
- **THEN** the response SHALL include `query_id`, trend, reason_pareto, duration, and list page 1
#### Scenario: Cache TTL and eviction
- **WHEN** a DataFrame is cached
- **THEN** the cache TTL SHALL be 900 seconds (15 minutes)
- **THEN** L1 cache max_size SHALL be 8 entries with LRU eviction
- **THEN** the Redis namespace SHALL be `hold_dataset`
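A minimal sketch of the two pieces above — the deterministic SHA256 `query_id` and an L1 LRU cache with 8 entries. The real `ProcessLevelCache` interface is not shown in this spec, so the class below is illustrative only (TTL enforcement is omitted for brevity):

```python
import hashlib
import json
from collections import OrderedDict

def make_query_id(start_date: str, end_date: str) -> str:
    """Deterministic cache key: SHA256 over a canonical JSON of the primary params."""
    payload = json.dumps({"start_date": start_date, "end_date": end_date}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class ProcessLevelCache:
    """Illustrative L1 cache: at most max_size entries with LRU eviction."""
    def __init__(self, max_size: int = 8):
        self.max_size = max_size
        self._data: "OrderedDict[str, object]" = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict the least recently used entry
```

The same inputs always hash to the same `query_id`, so a repeated query hits the cache instead of Oracle.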
### Requirement: Hold dataset cache SHALL derive trend data from cached DataFrame
The module SHALL compute daily trend aggregations from the cached fact set.
#### Scenario: Trend derivation from cache
- **WHEN** `apply_view()` is called with a valid query_id
- **THEN** trend data SHALL be derived by grouping the cached DataFrame by date
- **THEN** the 07:30 shift boundary rule SHALL be applied
- **THEN** all three hold_type variants (quality, non_quality, all) SHALL be computed from the same DataFrame
- **THEN** hold_type filtering SHALL be applied in-memory without re-querying Oracle
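The 07:30 shift boundary rule (defined later in this spec: times at or after 07:30 attribute to the next calendar day) can be sketched as a single attribution function:

```python
from datetime import date, datetime, timedelta

SHIFT_BOUNDARY = (7, 30)  # 07:30

def shift_date(txn: datetime) -> date:
    """Attribute a transaction to its shift day: times at or after 07:30
    roll forward to the next calendar day; earlier times stay on the same day."""
    if (txn.hour, txn.minute) >= SHIFT_BOUNDARY:
        return txn.date() + timedelta(days=1)
    return txn.date()
```

Grouping the cached DataFrame by `shift_date(HOLDTXNDATE)` then yields the daily trend without touching Oracle.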
### Requirement: Hold dataset cache SHALL derive reason Pareto from cached DataFrame
The module SHALL compute reason distribution from the cached fact set.
#### Scenario: Reason Pareto derivation
- **WHEN** `apply_view()` is called with hold_type filter
- **THEN** reason Pareto SHALL be derived by grouping the filtered DataFrame by HOLDREASONNAME
- **THEN** items SHALL include count, qty, pct, and cumPct
- **THEN** items SHALL be sorted by count descending
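A pandas sketch of this derivation. The grouping column `HOLDREASONNAME` comes from the spec; treating `QTY` as the quantity column is an assumption for illustration:

```python
import pandas as pd

def reason_pareto(df: pd.DataFrame) -> list:
    """Derive the reason Pareto from the cached fact set: event count and
    quantity per HOLDREASONNAME, plus pct and cumulative pct, sorted by
    count descending."""
    g = (df.groupby("HOLDREASONNAME")
           .agg(count=("HOLDREASONNAME", "size"), qty=("QTY", "sum"))
           .sort_values("count", ascending=False)
           .reset_index()
           .rename(columns={"HOLDREASONNAME": "reason"}))
    total = g["count"].sum()
    g["pct"] = (g["count"] / total * 100).round(1)
    g["cumPct"] = g["pct"].cumsum().round(1)  # running cumulative percentage
    return g.to_dict("records")
```

Because this is pure in-memory aggregation, switching `hold_type` only re-filters `df` before the groupby; no re-query is needed.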
### Requirement: Hold dataset cache SHALL derive duration distribution from cached DataFrame
The module SHALL compute hold duration buckets from the cached fact set.
#### Scenario: Duration derivation
- **WHEN** `apply_view()` is called with hold_type filter
- **THEN** duration distribution SHALL be derived from records where RELEASETXNDATE IS NOT NULL
- **THEN** 4 buckets SHALL be computed: <4h, 4-24h, 1-3d, >3d
- **THEN** each bucket SHALL include count and pct
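The four buckets map naturally onto `pd.cut`. The spec does not say which side of each boundary is inclusive, so the half-open intervals below (`[0,4)`, `[4,24)`, `[24,72)`, `[72,∞)`) are an assumption:

```python
import pandas as pd

BUCKETS = ["<4h", "4-24h", "1-3d", ">3d"]

def duration_distribution(df: pd.DataFrame) -> list:
    """Bucket released holds by (RELEASETXNDATE - HOLDTXNDATE) in hours;
    rows with a NaT release date are excluded, per the spec above."""
    released = df.dropna(subset=["RELEASETXNDATE"])
    hours = (released["RELEASETXNDATE"] - released["HOLDTXNDATE"]).dt.total_seconds() / 3600
    cut = pd.cut(hours, bins=[0, 4, 24, 72, float("inf")], labels=BUCKETS, right=False)
    counts = cut.value_counts().reindex(BUCKETS, fill_value=0)
    total = int(counts.sum())
    return [
        {"range": b,
         "count": int(counts[b]),
         "pct": round(counts[b] / total * 100, 1) if total else 0.0}
        for b in BUCKETS
    ]
```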
### Requirement: Hold dataset cache SHALL derive paginated list from cached DataFrame
The module SHALL provide paginated detail records from the cached fact set.
#### Scenario: List pagination from cache
- **WHEN** `apply_view()` is called with page and per_page parameters
- **THEN** the cached DataFrame SHALL be filtered by hold_type and optional reason filter
- **THEN** records SHALL be sorted by HOLDTXNDATE descending
- **THEN** pagination SHALL be applied in-memory (offset + limit on the sorted DataFrame)
- **THEN** response SHALL include items and pagination metadata (page, perPage, total, totalPages)
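In-memory offset+limit pagination with metadata can be sketched as below; the bounds (page floors at 1, per_page caps at 200) follow the list-endpoint scenarios elsewhere in this spec:

```python
import math
import pandas as pd

def paginate(df: pd.DataFrame, page: int, per_page: int) -> dict:
    """Slice an already-sorted DataFrame in memory and attach pagination metadata."""
    page = max(1, page)                     # pages below 1 are treated as 1
    per_page = min(max(1, per_page), 200)   # per_page is capped at 200
    total = len(df)
    start = (page - 1) * per_page
    return {
        "items": df.iloc[start:start + per_page].to_dict("records"),
        "pagination": {
            "page": page,
            "perPage": per_page,
            "total": total,
            "totalPages": math.ceil(total / per_page) if total else 0,
        },
    }
```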
### Requirement: Hold dataset cache SHALL handle cache expiry gracefully
The module SHALL return appropriate signals when cache has expired.
#### Scenario: Cache expired during view request
- **WHEN** `apply_view()` is called with a query_id whose cache has expired
- **THEN** the response SHALL return `{ success: false, error: "cache_expired" }`
- **THEN** the HTTP status SHALL be 410 (Gone)
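The expiry contract is small enough to sketch directly: a missing or expired `query_id` yields the 410 `cache_expired` signal rather than a silent re-query (the view-derivation step is elided here):

```python
def apply_view(cache: dict, query_id: str, **filters):
    """Return (body, http_status). A dict stands in for the L1/L2 lookup."""
    df = cache.get(query_id)
    if df is None:
        # Cache expired or never existed: signal the client to re-run POST /query.
        return {"success": False, "error": "cache_expired"}, 410
    # ... derive trend / reason_pareto / duration / list from df ...
    return {"success": True, "data": {}}, 200
```

The frontend pairs this with an auto-retry: on 410 it re-issues the primary query with the last committed filters.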


@@ -1,172 +1,62 @@
## ADDED Requirements
## MODIFIED Requirements
### Requirement: Hold History API SHALL provide daily trend data with Redis caching
The API SHALL return daily aggregated hold/release metrics for the selected date range.
The Hold History API SHALL return trend, reason-pareto, duration, and list data from a single cached dataset via a two-phase query pattern (POST /query + GET /view). The old independent GET endpoints for trend, reason-pareto, duration, and list SHALL be replaced.
#### Scenario: Trend endpoint returns all three hold types
- **WHEN** `GET /api/hold-history/trend?start_date=2025-01-01&end_date=2025-01-31` is called
- **THEN** the response SHALL return `{ success: true, data: { days: [...] } }`
- **THEN** each day item SHALL contain `{ date, quality: { holdQty, newHoldQty, releaseQty, futureHoldQty }, non_quality: { ... }, all: { ... } }`
- **THEN** all three hold_type variants SHALL be included in a single response
#### Scenario: Primary query endpoint
- **WHEN** `POST /api/hold-history/query` is called with `{ start_date, end_date, hold_type }`
- **THEN** the service SHALL execute a single Oracle query (or read from cache) via `hold_dataset_cache.execute_primary_query()`
- **THEN** the response SHALL return `{ success: true, data: { query_id, trend, reason_pareto, duration, list, summary } }`
- **THEN** list SHALL contain page 1 with default per_page of 50
#### Scenario: Supplementary view endpoint
- **WHEN** `GET /api/hold-history/view?query_id=...&hold_type=...&reason=...&page=...&per_page=...` is called
- **THEN** the service SHALL read the cached DataFrame and derive filtered views via `hold_dataset_cache.apply_view()`
- **THEN** no Oracle query SHALL be executed
- **THEN** the response SHALL return `{ success: true, data: { trend, reason_pareto, duration, list } }`
#### Scenario: Cache expired on view request
- **WHEN** GET /view is called with an expired query_id
- **THEN** the response SHALL return `{ success: false, error: "cache_expired" }` with HTTP 410
#### Scenario: Trend uses shift boundary at 07:30
- **WHEN** daily aggregation is calculated
- **THEN** transactions with time >= 07:30 SHALL be attributed to the next calendar day
- **THEN** transactions with time < 07:30 SHALL be attributed to the current calendar day
#### Scenario: Trend deduplicates same-day multiple holds
- **WHEN** a lot is held multiple times on the same day
- **THEN** only one hold event SHALL be counted for that day (using ROW_NUMBER per CONTAINERID per day)
#### Scenario: Trend deduplicates future holds
- **WHEN** the same lot has multiple future holds for the same reason
- **THEN** only the first occurrence SHALL be counted (using ROW_NUMBER per CONTAINERID per HOLDREASONID)
#### Scenario: Trend hold type classification
- **WHEN** trend data is aggregated by hold type
- **THEN** quality classification SHALL use the same NON_QUALITY_HOLD_REASONS set as existing hold endpoints
- **THEN** holds with HOLDREASONNAME NOT in NON_QUALITY_HOLD_REASONS SHALL be classified as quality
- **THEN** the "all" variant SHALL include both quality and non-quality holds
#### Scenario: Trend Redis cache for recent two months
- **WHEN** the requested date range falls within the current month or previous month
- **THEN** the service SHALL check Redis for cached data at key `hold_history:daily:{YYYY-MM}`
- **THEN** if cache exists, data SHALL be returned from Redis
- **THEN** if cache is missing, data SHALL be queried from Oracle and stored in Redis with 12-hour TTL
#### Scenario: Trend direct Oracle query for older data
- **WHEN** the requested date range includes months older than the previous month
- **THEN** the service SHALL query Oracle directly without caching
#### Scenario: Trend cross-month query assembly
- **WHEN** the requested date range spans multiple months (e.g., 2025-01-15 to 2025-02-15)
- **THEN** the service SHALL fetch each month's data independently (from cache or Oracle)
- **THEN** the service SHALL trim the combined result to the exact requested date range
- **THEN** the response SHALL contain only days within start_date and end_date inclusive
#### Scenario: Trend error
- **WHEN** the database query fails
- **THEN** the response SHALL return `{ success: false, error: '查詢失敗' }` with HTTP 500
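The monthly-key scheme and cross-month assembly above can be sketched as two helpers (the per-month fetch-or-query and the 12-hour TTL write are elided; `days_by_month` stands in for whatever each month returned):

```python
from datetime import date

def month_keys(start: date, end: date) -> list:
    """Redis keys hold_history:daily:{YYYY-MM} covering every month in the range."""
    keys, y, m = [], start.year, start.month
    while (y, m) <= (end.year, end.month):
        keys.append(f"hold_history:daily:{y:04d}-{m:02d}")
        m += 1
        if m == 13:
            y, m = y + 1, 1
    return keys

def assemble_range(days_by_month: dict, start: date, end: date) -> list:
    """Combine per-month day lists, then trim to the exact requested range
    (inclusive on both ends, matching the spec)."""
    combined = [d for key in sorted(days_by_month) for d in days_by_month[key]]
    return [d for d in combined if start.isoformat() <= d["date"] <= end.isoformat()]
```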
### Requirement: Hold History API SHALL provide reason Pareto data
The API SHALL return hold reason distribution for Pareto analysis.
The reason Pareto data SHALL be derived from the cached dataset, not from a separate Oracle query.
#### Scenario: Reason Pareto endpoint
- **WHEN** `GET /api/hold-history/reason-pareto?start_date=2025-01-01&end_date=2025-01-31&hold_type=quality` is called
- **THEN** the response SHALL return `{ success: true, data: { items: [...] } }`
#### Scenario: Reason Pareto from cache
- **WHEN** reason Pareto is requested via GET /view with hold_type filter
- **THEN** the cached DataFrame SHALL be filtered by hold_type and grouped by HOLDREASONNAME
- **THEN** each item SHALL contain `{ reason, count, qty, pct, cumPct }`
- **THEN** items SHALL be sorted by count descending
- **THEN** pct SHALL be percentage of total hold events
- **THEN** cumPct SHALL be running cumulative percentage
#### Scenario: Reason Pareto uses shift boundary
- **WHEN** hold events are counted for Pareto
- **THEN** the 07:30 shift boundary rule SHALL be applied to HOLDTXNDATE
#### Scenario: Reason Pareto hold type filter
- **WHEN** hold_type is "quality"
- **THEN** only quality hold reasons SHALL be included
- **WHEN** hold_type is "non-quality"
- **THEN** only non-quality hold reasons SHALL be included
- **WHEN** hold_type is "all"
- **THEN** all hold reasons SHALL be included
### Requirement: Hold History API SHALL provide hold duration distribution
The API SHALL return hold duration distribution buckets.
The duration distribution SHALL be derived from the cached dataset.
#### Scenario: Duration endpoint
- **WHEN** `GET /api/hold-history/duration?start_date=2025-01-01&end_date=2025-01-31&hold_type=quality` is called
- **THEN** the response SHALL return `{ success: true, data: { items: [...] } }`
- **THEN** items SHALL contain 4 buckets: `{ range: "<4h", count, pct }`, `{ range: "4-24h", count, pct }`, `{ range: "1-3d", count, pct }`, `{ range: ">3d", count, pct }`
#### Scenario: Duration only includes released holds
- **WHEN** duration is calculated
- **THEN** only hold records with RELEASETXNDATE IS NOT NULL SHALL be included
- **THEN** duration SHALL be calculated as RELEASETXNDATE - HOLDTXNDATE
#### Scenario: Duration date range filter
- **WHEN** start_date and end_date are provided
- **THEN** only holds with HOLDTXNDATE within the date range (applying 07:30 shift boundary) SHALL be included
### Requirement: Hold History API SHALL provide department statistics
The API SHALL return hold/release statistics aggregated by department with optional person detail.
#### Scenario: Department endpoint
- **WHEN** `GET /api/hold-history/department?start_date=2025-01-01&end_date=2025-01-31&hold_type=quality` is called
- **THEN** the response SHALL return `{ success: true, data: { items: [...] } }`
- **THEN** each item SHALL contain `{ dept, holdCount, releaseCount, avgHoldHours, persons: [{ name, holdCount, releaseCount, avgHoldHours }] }`
- **THEN** items SHALL be sorted by holdCount descending
#### Scenario: Department with reason filter
- **WHEN** `GET /api/hold-history/department?start_date=2025-01-01&end_date=2025-01-31&hold_type=quality&reason=品質確認` is called
- **THEN** only hold records matching the specified reason SHALL be included in department and person statistics
#### Scenario: Department hold count vs release count
- **WHEN** department statistics are calculated
- **THEN** holdCount SHALL count records where HOLDEMPDEPTNAME equals the department AND HOLDTXNDATE is within the date range
- **THEN** releaseCount SHALL count records where RELEASEEMPDEPTNAME equals the department AND RELEASETXNDATE is within the date range
- **THEN** avgHoldHours SHALL be the average of (RELEASETXNDATE - HOLDTXNDATE) in hours for released holds initiated by that department
#### Scenario: Duration from cache
- **WHEN** duration is requested via GET /view
- **THEN** the cached DataFrame SHALL be filtered to released holds only
- **THEN** 4 buckets SHALL be computed: <4h, 4-24h, 1-3d, >3d
### Requirement: Hold History API SHALL provide paginated detail list
The API SHALL return a paginated list of individual hold/release records.
The detail list SHALL be paginated from the cached dataset.
#### Scenario: List endpoint
- **WHEN** `GET /api/hold-history/list?start_date=2025-01-01&end_date=2025-01-31&hold_type=quality&page=1&per_page=50` is called
- **THEN** the response SHALL return `{ success: true, data: { items: [...], pagination: { page, perPage, total, totalPages } } }`
- **THEN** each item SHALL contain: lotId, workorder, workcenter, holdReason, holdDate, holdEmp, holdComment, releaseDate, releaseEmp, releaseComment, holdHours, ncr
- **THEN** items SHALL be sorted by HOLDTXNDATE descending
#### Scenario: List pagination from cache
- **WHEN** list is requested via GET /view with page and per_page params
- **THEN** the cached DataFrame SHALL be filtered and paginated in-memory
- **THEN** response SHALL include items and pagination metadata
#### Scenario: List with reason filter
- **WHEN** `GET /api/hold-history/list?start_date=2025-01-01&end_date=2025-01-31&hold_type=quality&reason=品質確認` is called
- **THEN** only records matching the specified HOLDREASONNAME SHALL be returned
### Requirement: Hold History API SHALL keep department endpoint as separate query
The department endpoint SHALL remain as a separate Oracle query due to its unique person-level aggregation.
#### Scenario: List unreleased hold records
- **WHEN** a hold record has RELEASETXNDATE IS NULL
- **THEN** releaseDate SHALL be null
- **THEN** holdHours SHALL be calculated as (SYSDATE - HOLDTXNDATE) * 24
#### Scenario: List pagination bounds
- **WHEN** page is less than 1
- **THEN** page SHALL be treated as 1
- **WHEN** per_page exceeds 200
- **THEN** per_page SHALL be capped at 200
#### Scenario: List date range uses shift boundary
- **WHEN** records are filtered by date range
- **THEN** the 07:30 shift boundary rule SHALL be applied to HOLDTXNDATE
### Requirement: Hold History API SHALL use centralized SQL files
The API SHALL load SQL queries from files in the `src/mes_dashboard/sql/hold_history/` directory.
#### Scenario: SQL file organization
- **WHEN** the hold history service executes a query
- **THEN** the SQL SHALL be loaded from `sql/hold_history/<query_name>.sql`
- **THEN** the following SQL files SHALL exist: `trend.sql`, `reason_pareto.sql`, `duration.sql`, `department.sql`, `list.sql`
#### Scenario: SQL parameterization
- **WHEN** SQL queries are executed
- **THEN** all user-provided parameters (dates, hold_type, reason) SHALL be passed as bind parameters
- **THEN** no string interpolation SHALL be used for user input
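A sketch of the SQL-file + bind-parameter convention. It uses sqlite3 purely for illustration because it shares Oracle's `:name` bind syntax; the temp directory and table stand in for `sql/hold_history/` and `DW_MES_HOLDRELEASEHISTORY`:

```python
import sqlite3
import tempfile
from pathlib import Path

def load_sql(sql_dir: Path, name: str) -> str:
    """Load <name>.sql from the centralized SQL directory."""
    return (sql_dir / f"{name}.sql").read_text(encoding="utf-8")

# Materialize a tiny SQL directory, then execute with named binds:
# user input travels only through the params dict, never string interpolation.
sql_dir = Path(tempfile.mkdtemp())
(sql_dir / "list.sql").write_text(
    "SELECT reason FROM holds WHERE holdtxndate BETWEEN :start_date AND :end_date",
    encoding="utf-8",
)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE holds (reason TEXT, holdtxndate TEXT)")
conn.executemany("INSERT INTO holds VALUES (?, ?)",
                 [("品質確認", "2025-01-10"), ("other", "2024-12-01")])
rows = conn.execute(load_sql(sql_dir, "list"),
                    {"start_date": "2025-01-01", "end_date": "2025-01-31"}).fetchall()
# rows -> [('品質確認',)]
```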
### Requirement: Hold History API SHALL apply rate limiting
The API SHALL apply rate limiting to expensive endpoints.
#### Scenario: Rate limit on list endpoint
- **WHEN** the list endpoint receives excessive requests
- **THEN** rate limiting SHALL be applied using `configured_rate_limit` with a default of 90 requests per 60 seconds
#### Scenario: Rate limit on trend endpoint
- **WHEN** the trend endpoint receives excessive requests
- **THEN** rate limiting SHALL be applied using `configured_rate_limit` with a default of 60 requests per 60 seconds
### Requirement: Hold History page route SHALL serve static Vite HTML
The Flask route SHALL serve the pre-built Vite HTML file.
#### Scenario: Page route
- **WHEN** user navigates to `/hold-history`
- **THEN** Flask SHALL serve the pre-built HTML file from `static/dist/hold-history.html` via `send_from_directory`
- **THEN** the HTML SHALL NOT pass through Jinja2 template rendering
#### Scenario: Fallback HTML
- **WHEN** the pre-built HTML file does not exist
- **THEN** Flask SHALL return a minimal HTML page with the correct script tag and module import
#### Scenario: Department endpoint unchanged
- **WHEN** `GET /api/hold-history/department` is called
- **THEN** it SHALL continue to execute its own Oracle query
- **THEN** it SHALL NOT use the dataset cache


@@ -1,172 +1,34 @@
## ADDED Requirements
## MODIFIED Requirements
### Requirement: Hold History page SHALL display a filter bar with date range and hold type
The page SHALL provide a filter bar for selecting date range and hold type classification.
The page SHALL provide a filter bar for selecting date range and hold type classification. On query, the page SHALL use a two-phase flow: POST /query returns queryId, subsequent filter changes use GET /view.
#### Scenario: Default date range
- **WHEN** the page loads
- **THEN** the date range SHALL default to the first and last day of the current month
#### Scenario: Primary query via POST /query
- **WHEN** user clicks the query button (or page loads with default filters)
- **THEN** the page SHALL call `POST /api/hold-history/query` with `{ start_date, end_date, hold_type }`
- **THEN** the response queryId SHALL be stored for subsequent view requests
- **THEN** trend, reason-pareto, duration, and list SHALL all be populated from the single response
#### Scenario: Hold Type radio default
- **WHEN** the page loads
- **THEN** the Hold Type filter SHALL default to "品質異常"
- **THEN** three radio options SHALL display: 品質異常, 非品質異常, 全部
#### Scenario: Hold type or reason filter change uses GET /view
- **WHEN** user changes hold_type radio or clicks a reason in the Pareto chart (while queryId exists)
- **THEN** the page SHALL call `GET /api/hold-history/view?query_id=...&hold_type=...&reason=...`
- **THEN** no new Oracle query SHALL be triggered
- **THEN** trend, reason-pareto, duration, and list SHALL update from the view response
#### Scenario: Filter bar change reloads all data
- **WHEN** user changes the date range or Hold Type selection
- **THEN** all API calls (trend, reason-pareto, duration, department, list) SHALL reload with the new parameters
- **THEN** any active Reason Pareto filter SHALL be cleared
- **THEN** pagination SHALL reset to page 1
#### Scenario: Pagination uses GET /view
- **WHEN** user navigates to a different page in the detail list
- **THEN** the page SHALL call `GET /api/hold-history/view?query_id=...&page=...&per_page=...`
### Requirement: Hold History page SHALL display summary KPI cards
The page SHALL show 6 summary KPI cards derived from the trend data for the selected period.
#### Scenario: Date range change triggers new primary query
- **WHEN** user changes the date range and clicks query
- **THEN** the page SHALL call `POST /api/hold-history/query` with new dates
- **THEN** a new queryId SHALL replace the old one
#### Scenario: Summary cards rendering
- **WHEN** trend data is loaded
- **THEN** six cards SHALL display: Release 數量, New Hold 數量, Future Hold 數量, 淨變動, 期末 On Hold, 平均 Hold 時長
- **THEN** Release SHALL be displayed as a positive indicator (green)
- **THEN** New Hold and Future Hold SHALL be displayed as negative indicators (red/orange)
- **THEN** 淨變動 SHALL equal Release - New Hold - Future Hold
- **THEN** 期末 On Hold SHALL be the HOLDQTY of the last day in the selected range
- **THEN** number values SHALL use zh-TW number formatting
#### Scenario: Cache expired auto-retry
- **WHEN** GET /view returns `{ success: false, error: "cache_expired" }`
- **THEN** the page SHALL automatically re-execute `POST /api/hold-history/query` with the last committed filters
- **THEN** the view SHALL refresh with the new data
#### Scenario: Summary reflects filter bar only
- **WHEN** user clicks a Reason Pareto block
- **THEN** summary cards SHALL NOT change (they only respond to filter bar changes)
### Requirement: Hold History page SHALL display a Daily Trend chart
The page SHALL display a mixed line+bar chart showing daily hold stock and flow.
#### Scenario: Daily Trend chart rendering
- **WHEN** trend data is loaded
- **THEN** an ECharts mixed chart SHALL display with dual Y-axes
- **THEN** the left Y-axis SHALL show flow quantities (Release, New Hold, Future Hold)
- **THEN** the right Y-axis SHALL show HOLDQTY stock level
- **THEN** the X-axis SHALL show dates within the selected range
#### Scenario: Bar direction encoding
- **WHEN** daily trend bars are rendered
- **THEN** Release bars SHALL extend upward (positive direction, green color)
- **THEN** New Hold bars SHALL extend downward (negative direction, red color)
- **THEN** Future Hold bars SHALL extend downward (negative direction, orange color, stacked with New Hold)
- **THEN** HOLDQTY SHALL display as a line on the right Y-axis
#### Scenario: Hold Type switching without re-call
- **WHEN** user changes the Hold Type radio on the filter bar
- **THEN** if the date range has not changed, the trend chart SHALL update from locally cached data
- **THEN** no additional API call SHALL be made for the trend endpoint
#### Scenario: Daily Trend reflects filter bar only
- **WHEN** user clicks a Reason Pareto block
- **THEN** the Daily Trend chart SHALL NOT change (it only responds to filter bar changes)
### Requirement: Hold History page SHALL display a Reason Pareto chart
The page SHALL display a Pareto chart showing hold reason distribution.
#### Scenario: Reason Pareto rendering
- **WHEN** reason-pareto data is loaded
- **THEN** a Pareto chart SHALL display with bars (count per reason) and a cumulative percentage line
- **THEN** reasons SHALL be sorted by count descending
- **THEN** the cumulative line SHALL reach 100% at the rightmost bar
#### Scenario: Reason Pareto click filters downstream
- **WHEN** user clicks a reason bar in the Pareto chart
- **THEN** `reasonFilter` SHALL be set to the clicked reason name
- **THEN** Department table SHALL reload filtered by that reason
- **THEN** Detail table SHALL reload filtered by that reason
- **THEN** the clicked bar SHALL show a visual highlight
#### Scenario: Reason Pareto click toggle
- **WHEN** user clicks the same reason bar that is already active
- **THEN** `reasonFilter` SHALL be cleared
- **THEN** Department table and Detail table SHALL reload without reason filter
#### Scenario: Reason Pareto reflects filter bar only
- **WHEN** user clicks a reason bar
- **THEN** Summary KPIs, Daily Trend, and Duration chart SHALL NOT change
### Requirement: Hold History page SHALL display Hold Duration distribution
The page SHALL display a horizontal bar chart showing hold duration distribution.
#### Scenario: Duration chart rendering
- **WHEN** duration data is loaded
- **THEN** a horizontal bar chart SHALL display with 4 buckets: <4h, 4-24h, 1-3天, >3天
- **THEN** each bar SHALL show count and percentage
- **THEN** only released holds (RELEASETXNDATE IS NOT NULL) SHALL be included
#### Scenario: Duration reflects filter bar only
- **WHEN** user clicks a Reason Pareto block
- **THEN** the Duration chart SHALL NOT change (it only responds to filter bar changes)
### Requirement: Hold History page SHALL display Department statistics with expandable rows
The page SHALL display a table showing hold/release statistics per department, expandable to show individual persons.
#### Scenario: Department table rendering
- **WHEN** department data is loaded
- **THEN** a table SHALL display with columns: 部門, Hold 次數, Release 次數, 平均 Hold 時長(hr)
- **THEN** departments SHALL be sorted by Hold 次數 descending
- **THEN** each department row SHALL have an expand toggle
#### Scenario: Department row expansion
- **WHEN** user clicks the expand toggle on a department row
- **THEN** individual person rows SHALL display below the department row
- **THEN** person rows SHALL show: 人員名稱, Hold 次數, Release 次數, 平均 Hold 時長(hr)
#### Scenario: Department table responds to reason filter
- **WHEN** a Reason Pareto filter is active
- **THEN** department data SHALL reload filtered by the selected reason
- **THEN** only holds matching the reason SHALL be included in statistics
### Requirement: Hold History page SHALL display paginated Hold/Release detail list
The page SHALL display a detailed list of individual hold/release records with server-side pagination.
#### Scenario: Detail table columns
- **WHEN** detail data is loaded
- **THEN** a table SHALL display with columns: Lot ID, WorkOrder, 站別, Hold Reason, Hold 時間, Hold 人員, Hold Comment, Release 時間, Release 人員, Release Comment, 時長(hr), NCR
#### Scenario: Unreleased hold display
- **WHEN** a hold record has RELEASETXNDATE IS NULL
- **THEN** the Release 時間 column SHALL display "仍在 Hold"
- **THEN** the 時長 column SHALL display the duration from HOLDTXNDATE to current time
#### Scenario: Detail table pagination
- **WHEN** total records exceed per_page (50)
- **THEN** Prev/Next buttons and page info SHALL display
- **THEN** page info SHALL show "顯示 {start} - {end} / {total}"
#### Scenario: Detail table responds to reason filter
- **WHEN** a Reason Pareto filter is active
- **THEN** detail data SHALL reload filtered by the selected reason
- **THEN** pagination SHALL reset to page 1
#### Scenario: Filter changes reset pagination
- **WHEN** any filter changes (filter bar or Reason Pareto click)
- **THEN** pagination SHALL reset to page 1
### Requirement: Hold History page SHALL display active filter indicator
The page SHALL show a clear indicator when a Reason Pareto filter is active.
#### Scenario: Reason filter indicator
- **WHEN** a reason filter is active
- **THEN** a filter indicator SHALL display above the Department table section
- **THEN** the indicator SHALL show the active reason name
- **THEN** a clear button (✕) SHALL remove the reason filter
### Requirement: Hold History page SHALL handle loading and error states
The page SHALL display appropriate feedback during API calls and on errors.
#### Scenario: Initial loading overlay
- **WHEN** the page first loads
- **THEN** a full-page loading overlay SHALL display until all data is loaded
#### Scenario: API error handling
- **WHEN** an API call fails
- **THEN** an error banner SHALL display with the error message
- **THEN** the page SHALL NOT crash or become unresponsive
### Requirement: Hold History page SHALL have navigation links
The page SHALL provide navigation to related pages.
#### Scenario: Back to Hold Overview
- **WHEN** user clicks the "← Hold Overview" button in the header
- **THEN** the page SHALL navigate to `/hold-overview`
#### Scenario: Department still uses separate API
- **WHEN** department data needs to load or reload
- **THEN** the page SHALL call `GET /api/hold-history/department` separately


@@ -0,0 +1,71 @@
## ADDED Requirements
### Requirement: Resource dataset cache SHALL execute a single Oracle query and cache the result
The resource_dataset_cache module SHALL query Oracle once for the full shift-status fact set and cache it for subsequent derivations.
#### Scenario: Primary query execution and caching
- **WHEN** `execute_primary_query()` is called with date range, granularity, and resource filter parameters
- **THEN** a deterministic `query_id` SHALL be computed from all primary params using SHA256
- **THEN** if a cached DataFrame exists for this query_id (L1 or L2), it SHALL be used without querying Oracle
- **THEN** if no cache exists, a single Oracle query SHALL fetch all shift-status records from `DW_MES_RESOURCESTATUS_SHIFT` for the filtered resources and date range
- **THEN** the result DataFrame SHALL be stored in both L1 (ProcessLevelCache) and L2 (Redis as parquet/base64)
- **THEN** the response SHALL include `query_id`, summary (KPI, trend, heatmap, comparison), and detail page 1
#### Scenario: Cache TTL and eviction
- **WHEN** a DataFrame is cached
- **THEN** the cache TTL SHALL be 900 seconds (15 minutes)
- **THEN** L1 cache max_size SHALL be 8 entries with LRU eviction
- **THEN** the Redis namespace SHALL be `resource_dataset`
### Requirement: Resource dataset cache SHALL derive KPI summary from cached DataFrame
The module SHALL compute aggregated KPI metrics from the cached fact set.
#### Scenario: KPI derivation from cache
- **WHEN** summary view is derived from cached DataFrame
- **THEN** total hours for PRD, SBY, UDT, SDT, EGT, NST SHALL be summed
- **THEN** OU% and AVAIL% SHALL be computed from the hour totals
- **THEN** machine count SHALL be the distinct count of HISTORYID in the cached data
### Requirement: Resource dataset cache SHALL derive trend data from cached DataFrame
The module SHALL compute time-series aggregations from the cached fact set.
#### Scenario: Trend derivation
- **WHEN** summary view is derived with a given granularity (day/week/month/year)
- **THEN** the cached DataFrame SHALL be grouped by the granularity period
- **THEN** each period SHALL include PRD, SBY, UDT, SDT, EGT, NST hours and computed OU%, AVAIL%
### Requirement: Resource dataset cache SHALL derive heatmap from cached DataFrame
The module SHALL compute workcenter x date OU% matrix from the cached fact set.
#### Scenario: Heatmap derivation
- **WHEN** summary view is derived
- **THEN** the cached DataFrame SHALL be grouped by (workcenter, date)
- **THEN** each cell SHALL contain the OU% for that workcenter on that date
- **THEN** workcenters SHALL be sorted by workcenter_seq
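A pandas sketch of the heatmap derivation. `WORKCENTERNAME` and the `workcenter_seq` ordering come from this spec; the `SHIFTDATE` and `OU_PCT` column names are assumptions (the OU% formula itself is not defined here, so a precomputed column is assumed):

```python
import pandas as pd

def ou_heatmap(df: pd.DataFrame) -> pd.DataFrame:
    """Workcenter x date OU% matrix, rows ordered by workcenter sequence."""
    matrix = df.pivot_table(index="WORKCENTERNAME", columns="SHIFTDATE",
                            values="OU_PCT", aggfunc="mean")
    order = (df.drop_duplicates("WORKCENTERNAME")
               .sort_values("WORKCENTER_SEQ")["WORKCENTERNAME"])
    return matrix.reindex(order)  # stable row order for the frontend heatmap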
### Requirement: Resource dataset cache SHALL derive workcenter comparison from cached DataFrame
The module SHALL compute per-workcenter aggregated metrics from the cached fact set.
#### Scenario: Comparison derivation
- **WHEN** summary view is derived
- **THEN** the cached DataFrame SHALL be grouped by workcenter
- **THEN** each workcenter SHALL include total hours and computed OU%
- **THEN** results SHALL be sorted by OU% descending, limited to top 15
### Requirement: Resource dataset cache SHALL derive paginated detail from cached DataFrame
The module SHALL provide hierarchical detail records from the cached fact set.
#### Scenario: Detail derivation and pagination
- **WHEN** detail view is requested with page and per_page parameters
- **THEN** the cached DataFrame SHALL be used to compute per-resource metrics
- **THEN** resource dimension data (WORKCENTERNAME, RESOURCEFAMILYNAME) SHALL be merged from resource_cache
- **THEN** results SHALL be structured as a hierarchical tree (workcenter -> family -> resource)
- **THEN** pagination SHALL apply to the flattened list
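The workcenter -> family -> resource tree can be built with nested groupbys. `WORKCENTERNAME` and `RESOURCEFAMILYNAME` are named in this spec; `RESOURCENAME` as the leaf column is an assumption:

```python
import pandas as pd

def build_tree(df: pd.DataFrame) -> list:
    """Group per-resource rows into a workcenter -> family -> resource hierarchy."""
    tree = []
    for wc, wc_df in df.groupby("WORKCENTERNAME", sort=True):
        families = []
        for fam, fam_df in wc_df.groupby("RESOURCEFAMILYNAME", sort=True):
            families.append({
                "family": fam,
                "resources": fam_df["RESOURCENAME"].tolist(),
            })
        tree.append({"workcenter": wc, "families": families})
    return tree
```

Pagination then operates on the flattened leaf list, while the tree shape drives the expandable table.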
### Requirement: Resource dataset cache SHALL handle cache expiry gracefully
The module SHALL return appropriate signals when cache has expired.
#### Scenario: Cache expired during view request
- **WHEN** a view is requested with a query_id whose cache has expired
- **THEN** the response SHALL return `{ success: false, error: "cache_expired" }`
- **THEN** the HTTP status SHALL be 410 (Gone)


@@ -1,139 +1,46 @@
## ADDED Requirements
### Requirement: Resource History page SHALL display KPI summary cards
The page SHALL show 9 KPI cards with aggregated performance metrics for the queried period.
#### Scenario: KPI cards rendering
- **WHEN** summary data is loaded from `GET /api/resource/history/summary`
- **THEN** 9 cards SHALL display: OU%, AVAIL%, PRD, SBY, UDT, SDT, EGT, NST, Machine Count
- **THEN** hour values SHALL format with "K" suffix for large numbers (e.g., 2.5K)
- **THEN** percentage values SHALL use `buildResourceKpiFromHours()` from `core/compute.js`
### Requirement: Resource History page SHALL display trend chart
The page SHALL show OU% and Availability% trends over time.
#### Scenario: Trend chart rendering
- **WHEN** summary data is loaded
- **THEN** a line chart with area fill SHALL display OU% and AVAIL% time series
- **THEN** the chart SHALL use vue-echarts with `autoresize` prop
- **THEN** smooth curves with 0.2 opacity area style SHALL render
### Requirement: Resource History page SHALL display stacked status distribution chart
The page SHALL show E10 status hour distribution over time.
#### Scenario: Stacked bar chart rendering
- **WHEN** summary data is loaded
- **THEN** a stacked bar chart SHALL display PRD, SBY, UDT, SDT, EGT, NST hours per period
- **THEN** each status SHALL use its designated color (PRD=green, SBY=blue, UDT=red, SDT=yellow, EGT=purple, NST=gray)
- **THEN** tooltips SHALL show percentages calculated dynamically
### Requirement: Resource History page SHALL display workcenter comparison chart
The page SHALL show top workcenters ranked by OU%.
#### Scenario: Comparison chart rendering
- **WHEN** summary data is loaded
- **THEN** a horizontal bar chart SHALL display top 15 workcenters by OU%
- **THEN** bars SHALL be color-coded: green (≥80%), yellow (≥50%), red (<50%)
- **THEN** data SHALL display in descending OU% order (top to bottom)
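The color thresholds can be expressed directly (the return values are illustrative tokens, not the page's actual hex colors):

```python
def bar_color(ou_pct: float) -> str:
    """Map an OU% value to its spec color band: green >=80, yellow >=50, red otherwise."""
    if ou_pct >= 80:
        return "green"
    if ou_pct >= 50:
        return "yellow"
    return "red"
```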
### Requirement: Resource History page SHALL display OU% heatmap
The page SHALL show a heatmap of OU% by workcenter and date.
#### Scenario: Heatmap chart rendering
- **WHEN** summary data is loaded
- **THEN** a 2D heatmap SHALL display: workcenters (Y-axis) × dates (X-axis)
- **THEN** color scale SHALL range from red (low OU%) through yellow to green (high OU%)
- **THEN** workcenters SHALL sort by `workcenter_seq` for consistent ordering
### Requirement: Resource History page SHALL display hierarchical detail table
The page SHALL show a three-level expandable table with per-resource performance metrics.
#### Scenario: Detail table rendering
- **WHEN** detail data is loaded from `GET /api/resource/history/detail`
- **THEN** a tree table SHALL display with columns: Name, OU%, AVAIL%, PRD, SBY, UDT, SDT, EGT, NST, Count
- **THEN** Level 0 rows SHALL show workcenter groups with aggregated metrics
- **THEN** Level 1 rows SHALL show resource families with aggregated metrics
- **THEN** Level 2 rows SHALL show individual resources
#### Scenario: Hour and percentage display
- **WHEN** detail data renders
- **THEN** status columns SHALL display hours with percentage: "10.5h (25%)"
- **THEN** KPI values SHALL be computed using `buildResourceKpiFromHours()` from `core/compute.js`
#### Scenario: Tree expand and collapse
- **WHEN** user clicks the expand button on a row
- **THEN** child rows SHALL toggle visibility
- **WHEN** user clicks "Expand All" or "Collapse All"
- **THEN** all rows SHALL expand or collapse accordingly
## MODIFIED Requirements
### Requirement: Resource History page SHALL support date range and granularity selection
The page SHALL allow users to specify time range and aggregation granularity. On query, the page SHALL use a two-phase flow: POST /query returns queryId, subsequent filter changes use GET /view.
#### Scenario: Date range selection
- **WHEN** the page loads
- **THEN** date inputs SHALL default to last 7 days (yesterday minus 6 days)
- **THEN** date range SHALL NOT exceed 730 days (2 years)
#### Scenario: Granularity buttons
- **WHEN** user clicks a granularity button (日/週/月/年)
- **THEN** the active button SHALL highlight
- **THEN** the next query SHALL use the selected granularity (day/week/month/year)
#### Scenario: Primary query via POST /query
- **WHEN** user clicks the query button
- **THEN** the page SHALL call `POST /api/resource/history/query` with date range, granularity, and resource filters
- **THEN** the response queryId SHALL be stored for subsequent view requests
- **THEN** summary (KPI, trend, heatmap, comparison) and detail page 1 SHALL all be populated from the single response
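The decision between the two phases can be sketched as follows (names are illustrative; the real logic lives in the frontend page code):

```python
class HistoryQueryState:
    """Two-phase state: the primary params own the query_id, and
    supplementary filter changes reuse it via GET /view."""

    def __init__(self) -> None:
        self.query_id: str | None = None

    def needs_primary_query(self, primary_params_changed: bool) -> bool:
        # A new Oracle-backed POST /query is only needed when no query_id
        # exists yet or the primary params (date range, granularity) changed;
        # everything else is served from the cached DataFrame via GET /view.
        return self.query_id is None or primary_params_changed
```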
### Requirement: Resource History page SHALL support multi-select filtering
The page SHALL provide multi-select dropdown filters for workcenter groups and families, and SHALL support interdependent narrowing with machine options and selected-value pruning.
#### Scenario: Filter change uses GET /view
- **WHEN** user changes supplementary filters (workcenter groups, families, machines, equipment type) while queryId exists
- **THEN** the page SHALL call `GET /api/resource/history/view?query_id=...&filters...`
- **THEN** no new Oracle query SHALL be triggered
- **THEN** all charts, KPI cards, and detail table SHALL update from the view response
#### Scenario: Multi-select dropdown
- **WHEN** user clicks a multi-select dropdown trigger
- **THEN** a dropdown SHALL display with checkboxes for each option
- **THEN** "Select All" and "Clear All" buttons SHALL be available
- **THEN** clicking outside the dropdown SHALL close it
#### Scenario: Pagination uses GET /view
- **WHEN** user navigates to a different page in the detail table
- **THEN** the page SHALL call `GET /api/resource/history/view?query_id=...&page=...`
#### Scenario: Filter options loading
- **WHEN** the page loads
- **THEN** workcenter groups and families SHALL load from `GET /api/resource/history/options`
- **THEN** machine candidates SHALL be derivable from the loaded option resources before the first query
#### Scenario: Upstream filters narrow downstream options
- **WHEN** user changes upstream filters (`workcenterGroups`, `families`, equipment-type flags)
- **THEN** machine options SHALL be recomputed to only include matching resources
- **THEN** narrowed options SHALL be reflected immediately in filter controls
#### Scenario: Invalid selected machines are pruned
- **WHEN** upstream filters change and selected machines are no longer valid
- **THEN** invalid selected machine values SHALL be removed automatically
- **THEN** remaining valid selected machine values SHALL be preserved
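The pruning rule is a straightforward filter; a sketch (function and parameter names are illustrative):

```python
def prune_selected(selected: list[str], valid_machines: set[str]) -> list[str]:
    """Drop selected machine values that the narrowed options no longer
    offer, preserving the original selection order."""
    return [m for m in selected if m in valid_machines]
```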
#### Scenario: Equipment type checkboxes
- **WHEN** user toggles a checkbox (生產設備, 重點設備, 監控設備)
- **THEN** the next query SHALL include the corresponding filter parameter
- **THEN** option narrowing SHALL also honor the same checkbox conditions
#### Scenario: Date range or granularity change triggers new primary query
- **WHEN** user changes date range or granularity and clicks query
- **THEN** the page SHALL call `POST /api/resource/history/query` with new params
- **THEN** a new queryId SHALL replace the old one
#### Scenario: Cache expired auto-retry
- **WHEN** GET /view returns `{ success: false, error: "cache_expired" }`
- **THEN** the page SHALL automatically re-execute `POST /api/resource/history/query` with the last committed filters
- **THEN** the view SHALL refresh with the new data
### Requirement: Resource History page SHALL support CSV export
The page SHALL allow users to export the current query results as CSV.
#### Scenario: CSV export
- **WHEN** user clicks the "匯出 CSV" button
- **THEN** the browser SHALL download a CSV file from `GET /api/resource/history/export` with current filters
- **THEN** the filename SHALL be `resource_history_{start_date}_to_{end_date}.csv`
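The cache_expired auto-retry described above can be sketched with injected transport callables, so no real HTTP is involved (the actual page implements this in frontend JS):

```python
def fetch_view_with_retry(query_id, filters, get_view, run_query):
    """Fetch a view, transparently re-running the primary query on 410.

    `get_view(query_id, filters)` returns (payload, status);
    `run_query(filters)` returns a fresh query_id. Both stand in for
    the real GET /view and POST /query HTTP calls.
    """
    payload, status = get_view(query_id, filters)
    if status == 410 and payload.get("error") == "cache_expired":
        # Cache evicted server-side: re-run the primary query with the
        # last committed filters, then retry the view once.
        query_id = run_query(filters)
        payload, status = get_view(query_id, filters)
    return query_id, payload
```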
### Requirement: Resource History page SHALL display KPI summary cards
The page SHALL show 9 KPI cards with aggregated performance metrics derived from the cached dataset.
#### Scenario: KPI cards from cached data
- **WHEN** summary data is derived from the cached DataFrame
- **THEN** 9 cards SHALL display: OU%, AVAIL%, PRD, SBY, UDT, SDT, EGT, NST, Machine Count
- **THEN** values SHALL be computed from the cached shift-status records, not from a separate Oracle query
### Requirement: Resource History page SHALL display hierarchical detail table
The page SHALL show a three-level expandable table derived from the cached dataset.
#### Scenario: Detail table from cached data
- **WHEN** detail data is derived from the cached DataFrame
- **THEN** a tree table SHALL display with the same columns and hierarchy as before
- **THEN** data SHALL be derived in-memory from the cached DataFrame, not from a separate Oracle query
### Requirement: Resource History page SHALL handle loading and error states
The page SHALL display appropriate feedback during API calls and on errors.
#### Scenario: Query loading state
- **WHEN** a query is executing
- **THEN** the query button SHALL be disabled
- **THEN** a loading indicator SHALL display
#### Scenario: API error handling
- **WHEN** an API call fails
- **THEN** a toast notification SHALL display the error message
- **THEN** the page SHALL NOT crash or become unresponsive
#### Scenario: No data placeholder
- **WHEN** query returns empty results
- **THEN** charts and table SHALL display "No data" placeholders