feat: dataset cache for hold/resource history + slow connection migration

Two changes combined:

1. historical-query-slow-connection: Migrate all historical query pages to
   read_sql_df_slow with semaphore concurrency control (max 3), raise DB slow
   timeout to 300s, gunicorn timeout to 360s, and unify frontend timeouts to
   360s for all historical pages.

2. hold-resource-history-dataset-cache: Convert hold-history and resource-history
   from multi-query to single-query + dataset cache pattern (L1 ProcessLevelCache
   + L2 Redis parquet/base64, TTL=900s). Replace old GET endpoints with POST
   /query + GET /view two-phase API. Frontend auto-retries on 410 cache_expired.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -0,0 +1,2 @@
schema: spec-driven
created: 2026-02-25
@@ -0,0 +1,84 @@

## Context

Hold-history fires 4 independent Oracle queries per user interaction (trend, reason-pareto, duration, list), all against `DW_MES_HOLDRELEASEHISTORY` with the same date range + hold_type filter. Resource-history fires 4 Oracle queries (3 parallel in summary + 1 detail), all against `DW_MES_RESOURCESTATUS_SHIFT` with the same filter set. Both pages re-query Oracle on every filter change or pagination.

Reject-history already solved this problem with a two-phase dataset cache pattern (`reject_dataset_cache.py`): one Oracle query caches the full fact set, subsequent views are derived from cache via pandas. This change applies the same pattern to hold-history and resource-history.

## Goals / Non-Goals

**Goals:**

- Reduce Oracle queries from 4 per interaction to 1 per user session (per filter combination)
- Same cache infrastructure as reject-history: L1 (ProcessLevelCache) + L2 (Redis parquet/base64), 15-minute TTL
- Same API pattern: POST /query (primary) + GET /view (supplementary, from cache)
- Maintain all existing UI functionality — same charts, tables, filters, pagination
- Frontend adopts queryId-based two-phase flow

**Non-Goals:**

- Changing the SQL queries themselves (same table, same WHERE logic)
- Adding new visualizations or metrics
- Modifying other pages (reject-history, query-tool, etc.)
- Changing the department endpoint on hold-history (it has unique person-level expansion logic that benefits from its own query — we keep it as a separate call)

## Decisions

### D1: Follow reject_dataset_cache.py architecture exactly

**Decision**: Create `hold_dataset_cache.py` and `resource_dataset_cache.py` following the same module structure:

- `_make_query_id()` — SHA256 hash of primary params
- `_redis_store_df()` / `_redis_load_df()` — parquet/base64 encoding
- `_get_cached_df()` / `_store_df()` — L1 → L2 read-through
- `execute_primary_query()` — Oracle query + cache + derive initial view
- `apply_view()` — read cache + filter + re-derive
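
As a sketch, the query-id helper can be as small as a SHA256 over the canonically serialized primary params (the exact param set per module is defined in D2/D3; the dict layout here is illustrative):

```python
import hashlib
import json


def make_query_id(params: dict) -> str:
    """Deterministic dataset-cache key: SHA256 of the sorted primary params."""
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Sorting the keys makes the id independent of param order, so the same filter combination always maps to the same cache entry.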

**Rationale**: Proven pattern, consistent codebase, shared infrastructure. Alternatives (custom cache format, separate cache layers) add complexity for no benefit.

### D2: Hold-history primary query scope

**Decision**: The primary query fetches ALL hold/release records for the date range (all hold_types). Trend, reason-pareto, duration, and list are all derived from this single cached DataFrame. Department remains a separate API call.

**Rationale**: Trend data already contains all 3 hold_type variants in one query. By caching the raw facts (not pre-aggregated), we can switch hold_type views instantly from cache. Department has unique person-level JOINs and GROUP BY logic that doesn't fit the "filter from flat DataFrame" pattern cleanly.

**Alternatives considered**:

- Cache per hold_type: wastes 3x cache memory, still requires Oracle for type switching
- Include department in cache: complex person-level aggregation doesn't map well to flat DataFrame filtering

### D3: Resource-history primary query scope

**Decision**: The primary query fetches ALL shift-status records for the date range and resource filter combination. KPI, trend, heatmap, workcenter comparison, and detail are all derived from this single cached DataFrame.

**Rationale**: All 4 current queries (kpi, trend, heatmap, detail) use the same base WHERE clause against the same table. The aggregations (GROUP BY date for trend, GROUP BY workcenter×date for heatmap, etc.) are simple pandas operations on the cached raw data.

### D4: Cache TTL = 15 minutes, same as reject-history

**Decision**: Use `_CACHE_TTL = 900` (15 min) for both modules, with L1 `max_size = 8`.

**Rationale**: Matches reject-history. 15 minutes covers typical analysis sessions. Users who need fresh data can re-query (which replaces the cache). Hold-history's existing 12h Redis cache for trend data is more aggressive but stale — 15 minutes is a better balance.
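
`ProcessLevelCache` is an existing in-house class; purely as an illustration of the TTL + `max_size` LRU semantics this decision relies on, a minimal stand-in could look like:

```python
import time
from collections import OrderedDict


class TTLLRUCache:
    """Minimal stand-in for ProcessLevelCache: per-process TTL + LRU eviction."""

    def __init__(self, ttl=900.0, max_size=8):
        self.ttl = ttl
        self.max_size = max_size
        self._data = OrderedDict()  # key -> (stored_at, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._data[key]          # expired
            return None
        self._data.move_to_end(key)      # mark as recently used
        return value

    def set(self, key, value):
        self._data[key] = (time.monotonic(), value)
        self._data.move_to_end(key)
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used
```

With `max_size=8`, the ninth distinct filter combination evicts the least recently used DataFrame, which bounds per-worker memory.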

### D5: API contract — POST /query + GET /view

**Decision**: Both pages switch to:

- `POST /api/hold-history/query` → primary query, returns `query_id` + initial view (trend, reason, duration, list page 1)
- `GET /api/hold-history/view` → supplementary filter/pagination from cache
- `POST /api/resource/history/query` → primary query, returns `query_id` + initial view (summary + detail page 1)
- `GET /api/resource/history/view` → supplementary filter/pagination from cache

Old GET endpoints (trend, reason-pareto, duration, list, summary, detail) are removed.

**Rationale**: Same pattern as reject-history. POST for primary (sends filter params in body), GET for view (sends query_id + supplementary params).

### D6: Frontend queryId-based flow

**Decision**: Both `App.vue` files adopt the two-phase pattern:

1. User clicks the "查詢" (Query) button → `POST /query` → store `queryId`
2. Filter change / pagination → `GET /view?query_id=...&filters...` (no Oracle)
3. Cache expired (HTTP 410) → auto re-execute primary query

**Rationale**: Proven pattern from reject-history. Keeps UI responsive after initial query.

## Risks / Trade-offs

- **[Memory]** Caching full DataFrames in L1 (per-worker) uses more RAM than current approach → Mitigated by `max_size=8` LRU eviction (same as reject-history, works well in production)
- **[Staleness]** 15-min TTL means data could be up to 15 minutes old during an analysis session → Acceptable for historical analysis; user can re-query for fresh data
- **[Department endpoint]** Hold-history department still makes a separate Oracle call → Acceptable trade-off; person-level aggregation doesn't fit flat DataFrame model. Could be addressed later.
- **[Breaking API]** Old GET endpoints removed → No external consumers; frontend is the only client
- **[Redis dependency]** If Redis is down, only L1 cache works (per-worker, not cross-worker) → Same behavior as reject-history; L1 still provides 15-min cache per worker
@@ -0,0 +1,31 @@

## Why

Hold-history and resource-history pages currently fire 4 separate Oracle queries per user interaction (filter change, pagination, refresh), all hitting the same base table with identical filter parameters. This wastes Oracle connections and creates unnecessary latency — especially now that these pages use `read_sql_df_slow` (dedicated connections with 300s timeout). The reject-history page already solves this with a "single query + cache derivation" pattern that reduces Oracle load by ~75%. Hold-history and resource-history should adopt the same architecture.

## What Changes

- **New `hold_dataset_cache.py`**: Two-phase cache module for hold-history. Single Oracle query caches the full hold/release fact set; subsequent views (trend, reason pareto, duration distribution, paginated list) are derived from cache using pandas.
- **New `resource_dataset_cache.py`**: Two-phase cache module for resource-history. Single Oracle query caches the full shift-status fact set; subsequent views (KPI, trend, heatmap, workcenter comparison, paginated detail) are derived from cache using pandas.
- **Hold-history route rewrite**: Replace 4 independent GET endpoints with POST /query (primary) + GET /view (supplementary) pattern.
- **Resource-history route rewrite**: Replace GET /summary (3 parallel queries) + GET /detail (1 query) with POST /query + GET /view pattern.
- **Frontend two-phase flow**: Both pages adopt queryId-based flow — primary query returns queryId + initial view; filter/pagination changes call GET /view with queryId (no Oracle).
- **Cache infrastructure**: L1 (ProcessLevelCache, in-process) + L2 (Redis, parquet/base64), 15-minute TTL, deterministic query ID from SHA256 of primary params. Same architecture as `reject_dataset_cache.py`.

## Capabilities

### New Capabilities

- `hold-dataset-cache`: Two-phase dataset cache for hold-history (single Oracle query + in-memory derivation for trend, reason pareto, duration, paginated list)
- `resource-dataset-cache`: Two-phase dataset cache for resource-history (single Oracle query + in-memory derivation for KPI, trend, heatmap, comparison, paginated detail)

### Modified Capabilities

- `hold-history-api`: Route endpoints change from 4 independent GETs to POST /query + GET /view
- `hold-history-page`: Frontend adopts two-phase queryId flow
- `resource-history-page`: Frontend adopts two-phase queryId flow; route endpoints consolidated

## Impact

- **Backend**: New files `hold_dataset_cache.py`, `resource_dataset_cache.py`; modified routes for both pages; service functions remain but are called only once per primary query
- **Frontend**: `hold-history/App.vue` and `resource-history/App.vue` rewritten for two-phase flow
- **Oracle load**: ~75% reduction per page (4 queries → 1 per user session, subsequent interactions from cache)
- **Redis**: Additional cache entries (~2 namespaces, same TTL/encoding as reject_dataset)
- **API contract**: Endpoint signatures change (breaking for these 2 pages, but no external consumers)
@@ -0,0 +1,64 @@

## ADDED Requirements

### Requirement: Hold dataset cache SHALL execute a single Oracle query and cache the result

The hold_dataset_cache module SHALL query Oracle once for the full hold/release fact set and cache it for subsequent derivations.

#### Scenario: Primary query execution and caching

- **WHEN** `execute_primary_query()` is called with date range and hold_type parameters
- **THEN** a deterministic `query_id` SHALL be computed from the primary params (start_date, end_date) using SHA256
- **THEN** if a cached DataFrame exists for this query_id (L1 or L2), it SHALL be used without querying Oracle
- **THEN** if no cache exists, a single Oracle query SHALL fetch all hold/release records from `DW_MES_HOLDRELEASEHISTORY` for the date range (all hold_types)
- **THEN** the result DataFrame SHALL be stored in both L1 (ProcessLevelCache) and L2 (Redis as parquet/base64)
- **THEN** the response SHALL include `query_id`, trend, reason_pareto, duration, and list page 1

#### Scenario: Cache TTL and eviction

- **WHEN** a DataFrame is cached
- **THEN** the cache TTL SHALL be 900 seconds (15 minutes)
- **THEN** L1 cache max_size SHALL be 8 entries with LRU eviction
- **THEN** the Redis namespace SHALL be `hold_dataset`

### Requirement: Hold dataset cache SHALL derive trend data from cached DataFrame

The module SHALL compute daily trend aggregations from the cached fact set.

#### Scenario: Trend derivation from cache

- **WHEN** `apply_view()` is called with a valid query_id
- **THEN** trend data SHALL be derived by grouping the cached DataFrame by date
- **THEN** the 07:30 shift boundary rule SHALL be applied
- **THEN** all three hold_type variants (quality, non_quality, all) SHALL be computed from the same DataFrame
- **THEN** hold_type filtering SHALL be applied in-memory without re-querying Oracle

### Requirement: Hold dataset cache SHALL derive reason Pareto from cached DataFrame

The module SHALL compute reason distribution from the cached fact set.

#### Scenario: Reason Pareto derivation

- **WHEN** `apply_view()` is called with hold_type filter
- **THEN** reason Pareto SHALL be derived by grouping the filtered DataFrame by HOLDREASONNAME
- **THEN** items SHALL include count, qty, pct, and cumPct
- **THEN** items SHALL be sorted by count descending
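
An illustrative pandas sketch of this derivation (only `HOLDREASONNAME` is named in this spec; the `QTY` column name is an assumption):

```python
import pandas as pd


def derive_reason_pareto(df: pd.DataFrame) -> list[dict]:
    """Group by reason, sort by count desc, add pct and cumulative pct."""
    grouped = (
        df.groupby("HOLDREASONNAME")
        .agg(count=("HOLDREASONNAME", "size"), qty=("QTY", "sum"))
        .sort_values("count", ascending=False)
        .reset_index()
        .rename(columns={"HOLDREASONNAME": "reason"})
    )
    grouped["pct"] = grouped["count"] / grouped["count"].sum() * 100
    grouped["cumPct"] = grouped["pct"].cumsum()
    return grouped.to_dict("records")
```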

### Requirement: Hold dataset cache SHALL derive duration distribution from cached DataFrame

The module SHALL compute hold duration buckets from the cached fact set.

#### Scenario: Duration derivation

- **WHEN** `apply_view()` is called with hold_type filter
- **THEN** duration distribution SHALL be derived from records where RELEASETXNDATE IS NOT NULL
- **THEN** 4 buckets SHALL be computed: <4h, 4-24h, 1-3d, >3d
- **THEN** each bucket SHALL include count and pct
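
A hedged sketch of the bucketing with `pandas.cut` (bucket edge ownership, e.g. whether exactly 4h falls in the first or second bucket, is an assumption here):

```python
import pandas as pd

LABELS = ["<4h", "4-24h", "1-3d", ">3d"]


def derive_duration(df: pd.DataFrame) -> list[dict]:
    """Bucket released holds (RELEASETXNDATE not null) by duration in hours."""
    released = df[df["RELEASETXNDATE"].notna()]
    hours = (released["RELEASETXNDATE"] - released["HOLDTXNDATE"]).dt.total_seconds() / 3600
    # right=False means each bucket owns its lower edge: [0,4), [4,24), [24,72), [72,inf)
    buckets = pd.cut(hours, bins=[0, 4, 24, 72, float("inf")], labels=LABELS, right=False)
    counts = buckets.value_counts().reindex(LABELS, fill_value=0)
    total = int(counts.sum())
    return [
        {"bucket": label, "count": int(n), "pct": (n / total * 100) if total else 0.0}
        for label, n in counts.items()
    ]
```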

### Requirement: Hold dataset cache SHALL derive paginated list from cached DataFrame

The module SHALL provide paginated detail records from the cached fact set.

#### Scenario: List pagination from cache

- **WHEN** `apply_view()` is called with page and per_page parameters
- **THEN** the cached DataFrame SHALL be filtered by hold_type and optional reason filter
- **THEN** records SHALL be sorted by HOLDTXNDATE descending
- **THEN** pagination SHALL be applied in-memory (offset + limit on the sorted DataFrame)
- **THEN** response SHALL include items and pagination metadata (page, perPage, total, totalPages)
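
The in-memory pagination step is a plain sort + slice; a sketch:

```python
import math

import pandas as pd


def paginate_list(df: pd.DataFrame, page: int = 1, per_page: int = 50) -> dict:
    """Sort newest-first, slice the requested page, and attach metadata."""
    ordered = df.sort_values("HOLDTXNDATE", ascending=False)
    total = len(ordered)
    start = (page - 1) * per_page  # offset
    return {
        "items": ordered.iloc[start:start + per_page].to_dict("records"),
        "pagination": {
            "page": page,
            "perPage": per_page,
            "total": total,
            "totalPages": math.ceil(total / per_page) if total else 0,
        },
    }
```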

### Requirement: Hold dataset cache SHALL handle cache expiry gracefully

The module SHALL return appropriate signals when cache has expired.

#### Scenario: Cache expired during view request

- **WHEN** `apply_view()` is called with a query_id whose cache has expired
- **THEN** the response SHALL return `{ success: false, error: "cache_expired" }`
- **THEN** the HTTP status SHALL be 410 (Gone)
@@ -0,0 +1,62 @@

## MODIFIED Requirements

### Requirement: Hold History API SHALL provide daily trend data with Redis caching

The Hold History API SHALL return trend, reason-pareto, duration, and list data from a single cached dataset via a two-phase query pattern (POST /query + GET /view). The old independent GET endpoints for trend, reason-pareto, duration, and list SHALL be replaced.

#### Scenario: Primary query endpoint

- **WHEN** `POST /api/hold-history/query` is called with `{ start_date, end_date, hold_type }`
- **THEN** the service SHALL execute a single Oracle query (or read from cache) via `hold_dataset_cache.execute_primary_query()`
- **THEN** the response SHALL return `{ success: true, data: { query_id, trend, reason_pareto, duration, list, summary } }`
- **THEN** list SHALL contain page 1 with default per_page of 50

#### Scenario: Supplementary view endpoint

- **WHEN** `GET /api/hold-history/view?query_id=...&hold_type=...&reason=...&page=...&per_page=...` is called
- **THEN** the service SHALL read the cached DataFrame and derive filtered views via `hold_dataset_cache.apply_view()`
- **THEN** no Oracle query SHALL be executed
- **THEN** the response SHALL return `{ success: true, data: { trend, reason_pareto, duration, list } }`

#### Scenario: Cache expired on view request

- **WHEN** GET /view is called with an expired query_id
- **THEN** the response SHALL return `{ success: false, error: "cache_expired" }` with HTTP 410

#### Scenario: Trend uses shift boundary at 07:30

- **WHEN** daily aggregation is calculated
- **THEN** transactions with time >= 07:30 SHALL be attributed to the next calendar day
- **THEN** transactions with time < 07:30 SHALL be attributed to the current calendar day
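
The boundary rule above (>= 07:30 rolls to the next calendar day) can be sketched as a vectorized pandas helper:

```python
import pandas as pd


def shift_day(ts: pd.Series) -> pd.Series:
    """Attribute timestamps at or after 07:30 to the next calendar day."""
    after_boundary = (ts.dt.hour * 60 + ts.dt.minute) >= (7 * 60 + 30)
    return (ts.dt.normalize() + pd.to_timedelta(after_boundary.astype(int), unit="D")).dt.date
```

The real derivation helper should reuse the existing logic in `hold_history_service.py`; this only shows the shape of the rule.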

#### Scenario: Trend hold type classification

- **WHEN** trend data is aggregated by hold type
- **THEN** quality classification SHALL use the same NON_QUALITY_HOLD_REASONS set as existing hold endpoints

### Requirement: Hold History API SHALL provide reason Pareto data

The reason Pareto data SHALL be derived from the cached dataset, not from a separate Oracle query.

#### Scenario: Reason Pareto from cache

- **WHEN** reason Pareto is requested via GET /view with hold_type filter
- **THEN** the cached DataFrame SHALL be filtered by hold_type and grouped by HOLDREASONNAME
- **THEN** each item SHALL contain `{ reason, count, qty, pct, cumPct }`
- **THEN** items SHALL be sorted by count descending

### Requirement: Hold History API SHALL provide hold duration distribution

The duration distribution SHALL be derived from the cached dataset.

#### Scenario: Duration from cache

- **WHEN** duration is requested via GET /view
- **THEN** the cached DataFrame SHALL be filtered to released holds only
- **THEN** 4 buckets SHALL be computed: <4h, 4-24h, 1-3d, >3d

### Requirement: Hold History API SHALL provide paginated detail list

The detail list SHALL be paginated from the cached dataset.

#### Scenario: List pagination from cache

- **WHEN** list is requested via GET /view with page and per_page params
- **THEN** the cached DataFrame SHALL be filtered and paginated in-memory
- **THEN** response SHALL include items and pagination metadata

### Requirement: Hold History API SHALL keep department endpoint as separate query

The department endpoint SHALL remain as a separate Oracle query due to its unique person-level aggregation.

#### Scenario: Department endpoint unchanged

- **WHEN** `GET /api/hold-history/department` is called
- **THEN** it SHALL continue to execute its own Oracle query
- **THEN** it SHALL NOT use the dataset cache
@@ -0,0 +1,34 @@

## MODIFIED Requirements

### Requirement: Hold History page SHALL display a filter bar with date range and hold type

The page SHALL provide a filter bar for selecting date range and hold type classification. On query, the page SHALL use a two-phase flow: POST /query returns queryId, subsequent filter changes use GET /view.

#### Scenario: Primary query via POST /query

- **WHEN** user clicks the query button (or page loads with default filters)
- **THEN** the page SHALL call `POST /api/hold-history/query` with `{ start_date, end_date, hold_type }`
- **THEN** the response queryId SHALL be stored for subsequent view requests
- **THEN** trend, reason-pareto, duration, and list SHALL all be populated from the single response

#### Scenario: Hold type or reason filter change uses GET /view

- **WHEN** user changes hold_type radio or clicks a reason in the Pareto chart (while queryId exists)
- **THEN** the page SHALL call `GET /api/hold-history/view?query_id=...&hold_type=...&reason=...`
- **THEN** no new Oracle query SHALL be triggered
- **THEN** trend, reason-pareto, duration, and list SHALL update from the view response

#### Scenario: Pagination uses GET /view

- **WHEN** user navigates to a different page in the detail list
- **THEN** the page SHALL call `GET /api/hold-history/view?query_id=...&page=...&per_page=...`

#### Scenario: Date range change triggers new primary query

- **WHEN** user changes the date range and clicks query
- **THEN** the page SHALL call `POST /api/hold-history/query` with new dates
- **THEN** a new queryId SHALL replace the old one

#### Scenario: Cache expired auto-retry

- **WHEN** GET /view returns `{ success: false, error: "cache_expired" }`
- **THEN** the page SHALL automatically re-execute `POST /api/hold-history/query` with the last committed filters
- **THEN** the view SHALL refresh with the new data

#### Scenario: Department still uses separate API

- **WHEN** department data needs to load or reload
- **THEN** the page SHALL call `GET /api/hold-history/department` separately
@@ -0,0 +1,71 @@

## ADDED Requirements

### Requirement: Resource dataset cache SHALL execute a single Oracle query and cache the result

The resource_dataset_cache module SHALL query Oracle once for the full shift-status fact set and cache it for subsequent derivations.

#### Scenario: Primary query execution and caching

- **WHEN** `execute_primary_query()` is called with date range, granularity, and resource filter parameters
- **THEN** a deterministic `query_id` SHALL be computed from all primary params using SHA256
- **THEN** if a cached DataFrame exists for this query_id (L1 or L2), it SHALL be used without querying Oracle
- **THEN** if no cache exists, a single Oracle query SHALL fetch all shift-status records from `DW_MES_RESOURCESTATUS_SHIFT` for the filtered resources and date range
- **THEN** the result DataFrame SHALL be stored in both L1 (ProcessLevelCache) and L2 (Redis as parquet/base64)
- **THEN** the response SHALL include `query_id`, summary (KPI, trend, heatmap, comparison), and detail page 1

#### Scenario: Cache TTL and eviction

- **WHEN** a DataFrame is cached
- **THEN** the cache TTL SHALL be 900 seconds (15 minutes)
- **THEN** L1 cache max_size SHALL be 8 entries with LRU eviction
- **THEN** the Redis namespace SHALL be `resource_dataset`

### Requirement: Resource dataset cache SHALL derive KPI summary from cached DataFrame

The module SHALL compute aggregated KPI metrics from the cached fact set.

#### Scenario: KPI derivation from cache

- **WHEN** summary view is derived from cached DataFrame
- **THEN** total hours for PRD, SBY, UDT, SDT, EGT, NST SHALL be summed
- **THEN** OU% and AVAIL% SHALL be computed from the hour totals
- **THEN** machine count SHALL be the distinct count of HISTORYID in the cached data
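
A sketch of the KPI derivation. The spec only says OU% and AVAIL% are computed "from the hour totals"; the formulas below (OU% = PRD / total, AVAIL% = (total - SDT - UDT) / total) are placeholder assumptions, and the authoritative definitions live in `resource_history_service.py`:

```python
import pandas as pd

STATE_COLS = ["PRD", "SBY", "UDT", "SDT", "EGT", "NST"]


def derive_kpi(df: pd.DataFrame) -> dict:
    """Sum state hours and derive ratio KPIs (ratio formulas are illustrative)."""
    totals = {col: float(df[col].sum()) for col in STATE_COLS}
    all_hours = sum(totals.values())
    return {
        **totals,
        "machine_count": int(df["HISTORYID"].nunique()),
        # Assumed formulas -- swap in the service's real definitions:
        "ou_pct": totals["PRD"] / all_hours * 100 if all_hours else 0.0,
        "avail_pct": (all_hours - totals["SDT"] - totals["UDT"]) / all_hours * 100 if all_hours else 0.0,
    }
```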

### Requirement: Resource dataset cache SHALL derive trend data from cached DataFrame

The module SHALL compute time-series aggregations from the cached fact set.

#### Scenario: Trend derivation

- **WHEN** summary view is derived with a given granularity (day/week/month/year)
- **THEN** the cached DataFrame SHALL be grouped by the granularity period
- **THEN** each period SHALL include PRD, SBY, UDT, SDT, EGT, NST hours and computed OU%, AVAIL%
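
The granularity grouping maps directly onto pandas periods (the `SHIFTDATE` column name is an assumption; ratio columns would be added per period the same way as in the KPI summary):

```python
import pandas as pd

FREQ = {"day": "D", "week": "W", "month": "M", "year": "Y"}
STATE_COLS = ["PRD", "SBY", "UDT", "SDT", "EGT", "NST"]


def derive_trend(df: pd.DataFrame, granularity: str = "day") -> pd.DataFrame:
    """Sum state hours per calendar period of the requested granularity."""
    period = df["SHIFTDATE"].dt.to_period(FREQ[granularity])
    return df.groupby(period)[STATE_COLS].sum()
```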

### Requirement: Resource dataset cache SHALL derive heatmap from cached DataFrame

The module SHALL compute workcenter × date OU% matrix from the cached fact set.

#### Scenario: Heatmap derivation

- **WHEN** summary view is derived
- **THEN** the cached DataFrame SHALL be grouped by (workcenter, date)
- **THEN** each cell SHALL contain the OU% for that workcenter on that date
- **THEN** workcenters SHALL be sorted by workcenter_seq
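
A sketch of the matrix build (column names `WORKCENTERNAME`, `DATE`, `WORKCENTER_SEQ` and the OU% formula are assumptions for illustration):

```python
import pandas as pd

STATE_COLS = ["PRD", "SBY", "UDT", "SDT", "EGT", "NST"]


def derive_heatmap(df: pd.DataFrame) -> pd.DataFrame:
    """Workcenter x date matrix of OU% (OU% = PRD / total hours is illustrative)."""
    g = df.groupby(["WORKCENTERNAME", "DATE"])[STATE_COLS].sum()
    matrix = (g["PRD"] / g[STATE_COLS].sum(axis=1) * 100).unstack("DATE")
    seq = df.groupby("WORKCENTERNAME")["WORKCENTER_SEQ"].first()
    return matrix.loc[seq.sort_values().index]  # rows in workcenter_seq order
```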

### Requirement: Resource dataset cache SHALL derive workcenter comparison from cached DataFrame

The module SHALL compute per-workcenter aggregated metrics from the cached fact set.

#### Scenario: Comparison derivation

- **WHEN** summary view is derived
- **THEN** the cached DataFrame SHALL be grouped by workcenter
- **THEN** each workcenter SHALL include total hours and computed OU%
- **THEN** results SHALL be sorted by OU% descending, limited to top 15
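
The same grouping pattern covers the comparison chart; the OU% formula is again a placeholder for the service's real definition:

```python
import pandas as pd

STATE_COLS = ["PRD", "SBY", "UDT", "SDT", "EGT", "NST"]


def derive_comparison(df: pd.DataFrame, top_n: int = 15) -> pd.DataFrame:
    """Per-workcenter hour totals with OU%, top-N workcenters by OU%."""
    g = df.groupby("WORKCENTERNAME")[STATE_COLS].sum()
    g["total_hours"] = g[STATE_COLS].sum(axis=1)
    g["ou_pct"] = g["PRD"] / g["total_hours"] * 100
    return g.sort_values("ou_pct", ascending=False).head(top_n)
```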

### Requirement: Resource dataset cache SHALL derive paginated detail from cached DataFrame

The module SHALL provide hierarchical detail records from the cached fact set.

#### Scenario: Detail derivation and pagination

- **WHEN** detail view is requested with page and per_page parameters
- **THEN** the cached DataFrame SHALL be used to compute per-resource metrics
- **THEN** resource dimension data (WORKCENTERNAME, RESOURCEFAMILYNAME) SHALL be merged from resource_cache
- **THEN** results SHALL be structured as a hierarchical tree (workcenter → family → resource)
- **THEN** pagination SHALL apply to the flattened list
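
The hierarchical structuring is a nested groupby; `RESOURCENAME` and the single `PRD` metric column are illustrative stand-ins for the full per-resource metric set:

```python
import pandas as pd


def build_detail_tree(df: pd.DataFrame) -> list[dict]:
    """Nest flat per-resource rows into workcenter -> family -> resource."""
    tree = []
    for wc, wc_df in df.groupby("WORKCENTERNAME", sort=True):
        families = []
        for fam, fam_df in wc_df.groupby("RESOURCEFAMILYNAME", sort=True):
            families.append({
                "family": fam,
                "resources": fam_df[["RESOURCENAME", "PRD"]].to_dict("records"),
            })
        tree.append({"workcenter": wc, "families": families})
    return tree
```

Per the scenario above, pagination then operates on the flattened row list rather than on this nested structure.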

### Requirement: Resource dataset cache SHALL handle cache expiry gracefully

The module SHALL return appropriate signals when cache has expired.

#### Scenario: Cache expired during view request

- **WHEN** a view is requested with a query_id whose cache has expired
- **THEN** the response SHALL return `{ success: false, error: "cache_expired" }`
- **THEN** the HTTP status SHALL be 410 (Gone)
@@ -0,0 +1,46 @@

## MODIFIED Requirements

### Requirement: Resource History page SHALL support date range and granularity selection

The page SHALL allow users to specify time range and aggregation granularity. On query, the page SHALL use a two-phase flow: POST /query returns queryId, subsequent filter changes use GET /view.

#### Scenario: Primary query via POST /query

- **WHEN** user clicks the query button
- **THEN** the page SHALL call `POST /api/resource/history/query` with date range, granularity, and resource filters
- **THEN** the response queryId SHALL be stored for subsequent view requests
- **THEN** summary (KPI, trend, heatmap, comparison) and detail page 1 SHALL all be populated from the single response

#### Scenario: Filter change uses GET /view

- **WHEN** user changes supplementary filters (workcenter groups, families, machines, equipment type) while queryId exists
- **THEN** the page SHALL call `GET /api/resource/history/view?query_id=...&filters...`
- **THEN** no new Oracle query SHALL be triggered
- **THEN** all charts, KPI cards, and detail table SHALL update from the view response

#### Scenario: Pagination uses GET /view

- **WHEN** user navigates to a different page in the detail table
- **THEN** the page SHALL call `GET /api/resource/history/view?query_id=...&page=...`

#### Scenario: Date range or granularity change triggers new primary query

- **WHEN** user changes date range or granularity and clicks query
- **THEN** the page SHALL call `POST /api/resource/history/query` with new params
- **THEN** a new queryId SHALL replace the old one

#### Scenario: Cache expired auto-retry

- **WHEN** GET /view returns `{ success: false, error: "cache_expired" }`
- **THEN** the page SHALL automatically re-execute `POST /api/resource/history/query` with the last committed filters
- **THEN** the view SHALL refresh with the new data

### Requirement: Resource History page SHALL display KPI summary cards

The page SHALL show 9 KPI cards with aggregated performance metrics derived from the cached dataset.

#### Scenario: KPI cards from cached data

- **WHEN** summary data is derived from the cached DataFrame
- **THEN** 9 cards SHALL display: OU%, AVAIL%, PRD, SBY, UDT, SDT, EGT, NST, Machine Count
- **THEN** values SHALL be computed from the cached shift-status records, not from a separate Oracle query

### Requirement: Resource History page SHALL display hierarchical detail table

The page SHALL show a three-level expandable table derived from the cached dataset.

#### Scenario: Detail table from cached data

- **WHEN** detail data is derived from the cached DataFrame
- **THEN** a tree table SHALL display with the same columns and hierarchy as before
- **THEN** data SHALL be derived in-memory from the cached DataFrame, not from a separate Oracle query
@@ -0,0 +1,42 @@

## 1. Hold-History Dataset Cache (Backend)

- [x] 1.1 Create `src/mes_dashboard/services/hold_dataset_cache.py` — module scaffolding: imports, logger, ProcessLevelCache (TTL=900, max_size=8), Redis namespace `hold_dataset`, `_make_query_id()`, `_redis_store_df()` / `_redis_load_df()`, `_get_cached_df()` / `_store_df()`
- [x] 1.2 Implement `execute_primary_query(start_date, end_date)` — single Oracle query fetching ALL hold/release facts for date range (all hold_types), cache result, derive initial view (trend, reason_pareto, duration, list page 1) using existing service functions
- [x] 1.3 Implement `apply_view(query_id, hold_type, reason, page, per_page)` — read cached DF, apply hold_type filter, derive trend + reason_pareto + duration + paginated list from filtered DF; return 410 on cache miss
- [x] 1.4 Implement in-memory derivation helpers: `_derive_trend(df, hold_type)`, `_derive_reason_pareto(df, hold_type)`, `_derive_duration(df, hold_type)`, `_derive_list(df, hold_type, reason, page, per_page)` — reuse shift boundary and hold_type classification logic from `hold_history_service.py`

## 2. Hold-History Routes (Backend)

- [x] 2.1 Add `POST /api/hold-history/query` route — parse body `{ start_date, end_date, hold_type }`, call `hold_dataset_cache.execute_primary_query()`, return `{ query_id, trend, reason_pareto, duration, list, summary }`
- [x] 2.2 Add `GET /api/hold-history/view` route — parse query params `query_id, hold_type, reason, page, per_page`, call `hold_dataset_cache.apply_view()`, return derived views or 410 on cache miss
- [x] 2.3 Remove old GET endpoints: `/api/hold-history/trend`, `/api/hold-history/reason-pareto`, `/api/hold-history/duration`, `/api/hold-history/list` — keep page route and department route (if exists)

## 3. Hold-History Frontend

- [x] 3.1 Rewrite `frontend/src/hold-history/App.vue` — two-phase flow: initial load calls `POST /query` → store queryId; hold_type change, reason filter, pagination call `GET /view?query_id=...`; cache expired (410) → auto re-execute primary query
- [x] 3.2 Derive summary KPI cards from trend data returned by query/view response (no separate API call)
- [x] 3.3 Update all chart components to consume data from the unified query/view response instead of individual API results

## 4. Resource-History Dataset Cache (Backend)

- [x] 4.1 Create `src/mes_dashboard/services/resource_dataset_cache.py` — module scaffolding: same cache infrastructure (TTL=900, max_size=8), Redis namespace `resource_dataset`, query ID helpers, L1+L2 cache read/write
- [x] 4.2 Implement `execute_primary_query(params)` — single Oracle query fetching ALL shift-status records for date range + resource filters, cache result, derive initial view (summary: kpi + trend + heatmap + comparison, detail page 1) using existing service functions
- [x] 4.3 Implement `apply_view(query_id, granularity, page, per_page)` — read cached DF, derive summary + paginated detail; return 410 on cache miss
- [x] 4.4 Implement in-memory derivation helpers: `_derive_kpi(df)`, `_derive_trend(df, granularity)`, `_derive_heatmap(df)`, `_derive_comparison(df)`, `_derive_detail(df, page, per_page)` — reuse aggregation logic from `resource_history_service.py`

## 5. Resource-History Routes (Backend)

- [x] 5.1 Add `POST /api/resource/history/query` route — parse body with date range, granularity, resource filters, call `resource_dataset_cache.execute_primary_query()`, return `{ query_id, summary, detail }`
- [x] 5.2 Add `GET /api/resource/history/view` route — parse query params `query_id, granularity, page, per_page`, call `resource_dataset_cache.apply_view()`, return derived views or 410
- [x] 5.3 Remove old GET endpoints: `/api/resource/history/summary`, `/api/resource/history/detail` — keep `/options` and `/export` endpoints

## 6. Resource-History Frontend

- [x] 6.1 Rewrite `frontend/src/resource-history/App.vue` — two-phase flow: query button calls `POST /query` → store queryId; filter changes call `GET /view?query_id=...`; cache expired → auto re-execute
- [x] 6.2 Update `executeCommittedQuery()` to use POST /query instead of parallel GET summary + GET detail
- [x] 6.3 Update all chart/table components to consume data from unified query/view response

## 7. Verification

- [x] 7.1 Run `python -m pytest tests/ -v` — no new test failures
- [x] 7.2 Run `cd frontend && npm run build` — frontend builds successfully