Two changes combined:

1. historical-query-slow-connection: Migrate all historical query pages to read_sql_df_slow with semaphore concurrency control (max 3), raise the DB slow timeout to 300s and the gunicorn timeout to 360s, and unify frontend timeouts at 360s for all historical pages.

2. hold-resource-history-dataset-cache: Convert hold-history and resource-history from a multi-query to a single-query + dataset cache pattern (L1 ProcessLevelCache + L2 Redis parquet/base64, TTL=900s). Replace the old GET endpoints with a two-phase POST /query + GET /view API. The frontend auto-retries on 410 cache_expired.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
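The two-phase dataset-cache pattern in change 2 can be sketched as follows. This is an illustration only: the real service uses ProcessLevelCache and Redis with parquet/base64-serialized DataFrames, which are stood in for here by plain dicts and JSON/base64 so the sketch stays stdlib-only; the function names, the caller-supplied token, and the `now` parameter are all hypothetical.

```python
import base64
import json
import time

TTL_SECONDS = 900  # dataset cache TTL from the change description

# L1: process-level cache stand-in (real code: ProcessLevelCache): token -> (expiry, dataset)
_l1 = {}
# L2: Redis stand-in (real code: Redis key with TTL): token -> (expiry, base64 blob)
_l2 = {}

def _encode(dataset):
    # Real code serializes a DataFrame to parquet then base64; JSON is a stand-in.
    return base64.b64encode(json.dumps(dataset).encode()).decode()

def _decode(blob):
    return json.loads(base64.b64decode(blob))

def post_query(token, dataset, now=None):
    """POST /query: run the single query once, cache the dataset, return a token."""
    now = time.time() if now is None else now
    expiry = now + TTL_SECONDS
    _l1[token] = (expiry, dataset)
    _l2[token] = (expiry, _encode(dataset))
    return {"token": token}

def get_view(token, now=None):
    """GET /view: serve the cached dataset, or 410 cache_expired if the TTL lapsed."""
    now = time.time() if now is None else now
    hit = _l1.get(token)
    if hit and hit[0] > now:
        return 200, hit[1]
    hit = _l2.get(token)  # L1 miss or expired: fall back to the Redis level
    if hit and hit[0] > now:
        dataset = _decode(hit[1])
        _l1[token] = (hit[0], dataset)  # repopulate L1 from L2
        return 200, dataset
    # Both levels expired: the frontend re-issues POST /query on this status.
    return 410, {"error": "cache_expired"}
```

The key property is that the expensive Oracle query runs once per POST /query; subsequent GET /view calls are cache reads, and a 410 tells the frontend to transparently re-run the query.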
MODIFIED Requirements
Requirement: Database query execution path
The resource-history service (resource_history_service.py) SHALL use read_sql_df_slow (dedicated connection) instead of read_sql_df (pooled connection) for all Oracle queries.
Scenario: Summary parallel queries use dedicated connections
- WHEN the resource-history summary query executes 3 parallel queries via ThreadPoolExecutor
- THEN each query uses read_sql_df_slow and acquires a semaphore slot
- AND all 3 queries complete and release their slots
Requirement: Frontend timeout
The resource-history page frontend SHALL use a 360-second API timeout for all Oracle-backed API calls.
Scenario: Large date range query completes
- WHEN a user queries resource history for a 2-year date range
- THEN the frontend does not abort the request for at least 360 seconds