Two changes combined:

1. historical-query-slow-connection: Migrate all historical query pages to read_sql_df_slow with semaphore concurrency control (max 3), raise the DB slow timeout to 300 s, raise the gunicorn timeout to 360 s, and unify frontend timeouts to 360 s for all historical pages.

2. hold-resource-history-dataset-cache: Convert hold-history and resource-history from a multi-query to a single-query + dataset-cache pattern (L1 ProcessLevelCache + L2 Redis parquet/base64, TTL = 900 s). Replace the old GET endpoints with a two-phase POST /query + GET /view API. The frontend auto-retries on 410 cache_expired.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
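The "semaphore concurrency control (max 3)" in change 1 can be sketched as a process-wide gate around the slow-query path. This is a minimal illustration, not the actual implementation: the names `SLOW_QUERY_SEMAPHORE` and `run_slow_query` are hypothetical.

```python
import threading

# Hypothetical sketch: at most 3 slow queries run concurrently per worker
# process; additional callers block here until a slot frees up.
SLOW_QUERY_SEMAPHORE = threading.BoundedSemaphore(3)

def run_slow_query(execute_fn):
    """Run execute_fn while holding one of the 3 slow-query slots."""
    with SLOW_QUERY_SEMAPHORE:
        return execute_fn()
```

With gunicorn's timeout at 360 s and the DB call_timeout at 300 s, a request that waits on the semaphore and then runs a full-length query still finishes inside the worker timeout only if the wait is short; the cap of 3 keeps dedicated connections from piling up on the Oracle side.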
MODIFIED Requirements
Requirement: Database query execution path
The hold-history service (hold_history_service.py) SHALL use read_sql_df_slow (dedicated connection) instead of read_sql_df (pooled connection) for all Oracle queries.
Scenario: Hold history queries use dedicated connection
- WHEN any hold-history query is executed (trend, pareto, duration, list)
- THEN it uses read_sql_df_slow, which creates a dedicated Oracle connection outside the pool
- AND the connection has a 300-second call_timeout (configurable)
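The scenario above can be sketched as follows. This is an assumed shape for `read_sql_df_slow`, not the source's implementation: connection parameters (`user`, `password`, `dsn`) and the helper's signature are illustrative, and the imports are deferred into the function so the sketch stands alone.

```python
# Assumed default matching the requirement's 300-second timeout;
# python-oracledb's Connection.call_timeout is in milliseconds.
DB_SLOW_CALL_TIMEOUT_MS = 300 * 1000

def read_sql_df_slow(sql, params=None, *, user, password, dsn,
                     call_timeout_ms=DB_SLOW_CALL_TIMEOUT_MS):
    """Run sql on a dedicated (non-pooled) Oracle connection, return a DataFrame."""
    # Imports kept local so the module loads without these optional deps.
    import oracledb
    import pandas as pd

    # Dedicated connection created outside any pool, closed when done.
    conn = oracledb.connect(user=user, password=password, dsn=dsn)
    conn.call_timeout = call_timeout_ms  # aborts calls exceeding the timeout
    try:
        return pd.read_sql(sql, conn, params=params)
    finally:
        conn.close()
```

Because the connection is opened per call rather than borrowed from the pool, a long-running historical query cannot exhaust pooled connections needed by fast interactive queries.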