feat: add Resource Cache module and table-name updates

- Add resource_cache.py module: Redis cache for the DW_MES_RESOURCE table
- Implement 4-hourly background sync (versioned by MAX(LASTCHANGEDATE))
- Integrate filter_cache to load workcenter groups from the WIP Redis cache first
- Integrate the health endpoint to report resource_cache status
- Switch resource_service and resource_history_service to the cache
- Rename table DWH.DW_PJ_LOT_V → DW_MES_LOT_V
- Add unit tests (28 tests) and E2E tests (15 tests)
- Fix cache mock issues in the wip_service tests
- Add Oracle authorized-objects documentation and a query tool

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
.env.example (+11)
@@ -72,3 +72,14 @@ REDIS_KEY_PREFIX=mes_wip
 # Cache check interval in seconds (default: 600 = 10 minutes)
 CACHE_CHECK_INTERVAL=600
+
+# ============================================================
+# Resource Cache Configuration
+# ============================================================
+# Enable/disable the Resource cache (DW_MES_RESOURCE)
+# When disabled, queries fall back to Oracle directly
+RESOURCE_CACHE_ENABLED=true
+
+# Resource cache sync interval in seconds (default: 14400 = 4 hours)
+# The cache checks for updates at this interval using MAX(LASTCHANGEDATE)
+RESOURCE_SYNC_INTERVAL=14400
@@ -36,5 +36,11 @@
       "status": "dev"
     }
   ],
-  "api_public": true
+  "api_public": true,
+  "db_scan": {
+    "schema": "DWH",
+    "updated_at": "2026-01-29 13:49:59",
+    "object_count": 19,
+    "source": "tools/query_table_schema.py"
+  }
 }
File diff suppressed because it is too large
@@ -1,8 +1,8 @@
 # MES Core Table Analysis Report

-**Generated**: 2026-01-14 (last updated: 2026-01-27)
-**Scope**: 17 MES core tables (including 1 DWH real-time view)
-**Sources**: MES_Database_Reference.md, analysis of live DWH.DW_PJ_LOT_V data
+**Generated**: 2026-01-14 (last updated: 2026-01-29)
+**Scope**: 19 MES core tables (including 2 DWH real-time views + 1 workcenter mapping view)
+**Sources**: MES_Database_Reference.md, analysis of live DW_MES_LOT_V, DW_MES_EQUIPMENTSTATUS_WIP_V, and DW_MES_SPEC_WORKCENTER_V data

 ---
@@ -19,59 +19,61 @@

 ## Table Classification Overview

-### Real-time Views
-Live WIP views pulled from the DWH over a DB link, refreshed automatically every 5 minutes
+### Real-time Views
+Live WIP / equipment-status views pulled from the DWH over a DB link, refreshed at the cadence of each source

 | Table | Rows | Primary Use | Update Mechanism |
 |------|--------|---------|---------|
-| **DWH.DW_PJ_LOT_V** | ~9,000-12,000 | Live WIP distribution (70 columns) | Synced from the DWH every 5 minutes |
+| **DW_MES_LOT_V** | ~9,468 | Live WIP distribution (70 columns) | Live DB-link query (follows the PJ_LOT_MV refresh cadence) |
+| **DW_MES_EQUIPMENTSTATUS_WIP_V** | ~2,631 | Equipment asset status + WIP tracking (32 columns) | Live DB-link query (truly real-time) |

 ### Snapshot Tables
 Hold the current state; rows are updated or overwritten

 | Table | Rows | Primary Use | Update Mechanism |
 |------|--------|---------|---------|
-| **DW_MES_WIP** | 77,470,834 | Current WIP (with historical accumulation) | Updated as production proceeds |
-| **DW_MES_RESOURCE** | 90,620 | Resource master (equipment/stations) | Updated on change |
-| **DW_MES_CONTAINER** | 5,185,532 | Current container state | Updated as lots move |
-| **DW_MES_JOB** | 1,239,659 | Current maintenance job state | Updated on job status change |
+| **DW_MES_WIP** | 79,058,085 | Current WIP (with historical accumulation) | Updated as production proceeds |
+| **DW_MES_RESOURCE** | 91,329 | Resource master (equipment/stations) | Updated on change |
+| **DW_MES_CONTAINER** | 5,218,406 | Current container state | Updated as lots move |
+| **DW_MES_JOB** | 1,248,622 | Current maintenance job state | Updated on job status change |
 ### Historical Tables
 Append-only; record the complete history

 | Table | Rows | Primary Use | Accumulation |
 |------|--------|---------|---------|
-| **DW_MES_RESOURCESTATUS** | 65,139,825 | Resource status change history | New row per status change |
-| **DW_MES_RESOURCESTATUS_SHIFT** | 74,155,046 | Resource shift status history | Aggregated per shift |
-| **DW_MES_LOTWIPHISTORY** | 53,085,425 | Lot movement history | New row per move-out/move-in |
-| **DW_MES_LOTWIPDATAHISTORY** | 77,168,503 | Lot data change history | New row per data collection |
-| **DW_MES_HM_LOTMOVEOUT** | 48,374,309 | Lot move-out events | New row per move-out |
-| **DW_MES_JOBTXNHISTORY** | 9,488,096 | Maintenance job transaction history | New row per job status change |
-| **DW_MES_LOTREJECTHISTORY** | 15,678,513 | Lot reject history | New row per scrap operation |
-| **DW_MES_LOTMATERIALSHISTORY** | 17,702,828 | Material consumption history | New row per material use |
-| **DW_MES_HOLDRELEASEHISTORY** | 310,033 | Hold/release history | New row per Hold/Release |
-| **DW_MES_MAINTENANCE** | 50,954,850 | Equipment maintenance history | New row per maintenance activity |
+| **DW_MES_RESOURCESTATUS** | 65,742,614 | Resource status change history | New row per status change |
+| **DW_MES_RESOURCESTATUS_SHIFT** | 74,820,134 | Resource shift status history | Aggregated per shift |
+| **DW_MES_LOTWIPHISTORY** | 53,454,213 | Lot movement history | New row per move-out/move-in |
+| **DW_MES_LOTWIPDATAHISTORY** | 77,960,216 | Lot data change history | New row per data collection |
+| **DW_MES_HM_LOTMOVEOUT** | 48,645,692 | Lot move-out events | New row per move-out |
+| **DW_MES_JOBTXNHISTORY** | 9,554,723 | Maintenance job transaction history | New row per job status change |
+| **DW_MES_LOTREJECTHISTORY** | 15,786,025 | Lot reject history | New row per scrap operation |
+| **DW_MES_LOTMATERIALSHISTORY** | 17,829,931 | Material consumption history | New row per material use |
+| **DW_MES_HOLDRELEASEHISTORY** | 310,737 | Hold/release history | New row per Hold/Release |
+| **DW_MES_MAINTENANCE** | 52,060,026 | Equipment maintenance history | New row per maintenance activity |

 ### Auxiliary Tables

 | Table | Rows | Primary Use |
 |------|--------|---------|
-| **DW_MES_PARTREQUESTORDER** | 61,396 | Part request orders |
-| **DW_MES_PJ_COMBINEDASSYLOTS** | 1,955,691 | Combined assembly lots |
+| **DW_MES_PARTREQUESTORDER** | 61,396 | Part request orders |
+| **DW_MES_PJ_COMBINEDASSYLOTS** | 1,965,425 | Combined assembly lots |
+| **DW_MES_SPEC_WORKCENTER_V** | 230 | Workcenter/operation mapping view |
 ---

 ## Real-time View Analysis

-### DWH.DW_PJ_LOT_V (Real-time WIP Lot View) ⭐⭐⭐
+### DW_MES_LOT_V (Real-time WIP Lot View) ⭐⭐⭐

 **Nature**: Real-time View

-**Business definition**: live WIP view provided by the DWH, sourced over a DB link from `PJ_LOT_MV@DWDB_MESDB` and refreshed automatically every 5 minutes. Carries 70 columns of full lot status, workcenter position, equipment info, hold reasons, etc., and is the primary data source for the WIP Dashboard.
+**Business definition**: live WIP view provided by the DWH, sourced over a DB link from `PJ_LOT_MV@DWDB_MESDB`, refreshed at the PJ_LOT_MV cadence. Carries 70 columns of full lot status, workcenter position, equipment info, hold reasons, etc., and is the primary data source for the WIP Dashboard.

 **Data source**: `PJ_LOT_MV@DWDB_MESDB` (DB link)

-**Row count**: ~9,000 - 12,000 (fluctuates)
+**Row count**: ~9,468 (queried 2026-01-29)

 #### Column Overview (70 columns)
@@ -210,7 +212,7 @@ SELECT
     SUM(QTY) as TOTAL_QTY,
     SUM(CASE WHEN STATUS = 'HOLD' THEN 1 ELSE 0 END) as HOLD_LOTS,
     SUM(CASE WHEN STATUS = 'HOLD' THEN QTY ELSE 0 END) as HOLD_QTY
-FROM DWH.DW_PJ_LOT_V
+FROM DW_MES_LOT_V
 WHERE OWNER NOT IN ('DUMMY') -- exclude DUMMY lots
 GROUP BY WORKCENTER_GROUP, WORKCENTER_SHORT, WORKCENTERSEQUENCE_GROUP
 ORDER BY TO_NUMBER(WORKCENTERSEQUENCE_GROUP);
@@ -223,7 +225,7 @@ SELECT
     PRODUCTLINENAME,
     COUNT(*) as LOT_COUNT,
     SUM(QTY) as TOTAL_QTY
-FROM DWH.DW_PJ_LOT_V
+FROM DW_MES_LOT_V
 WHERE OWNER NOT IN ('DUMMY')
 GROUP BY WORKCENTER_GROUP, PRODUCTLINENAME
 ORDER BY WORKCENTER_GROUP, LOT_COUNT DESC;
@@ -241,7 +243,7 @@ SELECT
     HOLDEMP,
     COMMENT_HOLD,
     AGEBYDAYS
-FROM DWH.DW_PJ_LOT_V
+FROM DW_MES_LOT_V
 WHERE STATUS = 'HOLD'
 ORDER BY AGEBYDAYS DESC;
 ```
@@ -254,7 +256,7 @@ SELECT
     COALESCE(EQUIPMENTNAME, EQUIPMENTS) as EQUIPMENT_INFO,
     EQUIPMENTCOUNT,
     QTY
-FROM DWH.DW_PJ_LOT_V
+FROM DW_MES_LOT_V
 WHERE COALESCE(EQUIPMENTNAME, EQUIPMENTS) IS NOT NULL
 ORDER BY WORKCENTERNAME;
 ```
@@ -281,7 +283,7 @@ SELECT
     OWNER,
     COALESCE(EQUIPMENTNAME, EQUIPMENTS) as EQUIPMENT,
     SYS_DATE
-FROM DWH.DW_PJ_LOT_V
+FROM DW_MES_LOT_V
 WHERE LOTID LIKE 'GA26011%' -- work-order filter
 ORDER BY WORKCENTERSEQUENCE;
 ```
@@ -308,7 +310,238 @@ ORDER BY WORKCENTERSEQUENCE;

 ---
### DW_MES_EQUIPMENTSTATUS_WIP_V (Equipment Status + WIP Tracking View) ⭐⭐

**Nature**: Real-time View

**Business definition**: DWH view exposing equipment asset status together with WIP tracking data, queried live over a DB link from `PJ_EquipmentStatus_WIP_V@DWDB_MESDB`; it is a truly real-time view, not an asynchronous snapshot. It combines equipment status, maintenance jobs, lot track-in, and wafer/package information, making it well suited to correlating equipment status with current WIP.

**Data source**: `PJ_EquipmentStatus_WIP_V@DWDB_MESDB` (DB link)

**Row count**: ~2,631 (queried 2026-01-29)

#### Column Overview (32 columns)

| Category | Columns | Description |
|------|--------|------|
| Equipment/resource identity | 3 | RESOURCEID, EQUIPMENTID, OBJECTCATEGORY |
| Equipment status | 2 | EQUIPMENTASSETSSTATUS, EQUIPMENTASSETSSTATUSREASON |
| Maintenance job | 11 | JOBORDER, JOBMODEL, JOBSTAGE, JOBID, JOBSTATUS, CREATEDATE, CREATEUSERNAME, CREATEUSER, SYMPTOMCODE, CAUSECODE, REPAIRCODE |
| WIP/product | 7 | RUNCARDLOTID, "Package", PACKAGE_LF, "Function", TYPE, BOP, SPEC |
| Wafer/material | 6 | WAFERLOTID, WAFERPN, WAFERLOTID_PREFIX, LFOPTIONID, WIREDESCRIPTION, WAFERMIL |
| Track-In | 3 | LOTTRACKINQTY_PCS, LOTTRACKINTIME, LOTTRACKINEMPLOYEE |
#### Column List (32 columns)

| Column | Type | Description |
|--------|------|--------------|
| `RESOURCEID` | CHAR(16) | Resource ID (resource master key) |
| `EQUIPMENTID` | VARCHAR2(40) | Equipment number (machine code) |
| `OBJECTCATEGORY` | VARCHAR2(40) | Category/process classification (e.g. ASSEMBLY) |
| `EQUIPMENTASSETSSTATUS` | VARCHAR2(40) | Equipment asset status (e.g. PRD, IDLE) |
| `EQUIPMENTASSETSSTATUSREASON` | VARCHAR2(40) | Status reason/description (e.g. Production RUN) |
| `JOBORDER` | VARCHAR2(40) | Maintenance job order number |
| `JOBMODEL` | VARCHAR2(40) | Maintenance job model |
| `JOBSTAGE` | VARCHAR2(40) | Maintenance job stage |
| `JOBID` | CHAR(16) | Internal maintenance job ID |
| `JOBSTATUS` | VARCHAR2(40) | Maintenance job status |
| `CREATEDATE` | DATE | Job creation time |
| `CREATEUSERNAME` | VARCHAR2(40) | Creator account |
| `CREATEUSER` | VARCHAR2(255) | Creator display name |
| `SYMPTOMCODE` | VARCHAR2(40) | Symptom code |
| `CAUSECODE` | VARCHAR2(40) | Failure cause code |
| `REPAIRCODE` | VARCHAR2(40) | Repair action code |
| `RUNCARDLOTID` | VARCHAR2(40) | Lot ID (run card lot id) |
| `"Package"` | VARCHAR2(40) | Package model (double quotes preserve case) |
| `PACKAGE_LF` | VARCHAR2(4000) | Package/leadframe type or description |
| `"Function"` | VARCHAR2(40) | Product function class (double quotes preserve case) |
| `TYPE` | VARCHAR2(40) | Product type |
| `BOP` | VARCHAR2(40) | BOP code |
| `WAFERLOTID` | VARCHAR2(40) | Wafer lot number |
| `WAFERPN` | VARCHAR2(40) | Wafer part number |
| `WAFERLOTID_PREFIX` | VARCHAR2(160) | Wafer lot prefix |
| `SPEC` | VARCHAR2(40) | Process/operation spec |
| `LFOPTIONID` | VARCHAR2(4000) | Leadframe option |
| `WIREDESCRIPTION` | VARCHAR2(4000) | Wire description |
| `WAFERMIL` | VARCHAR2(3062) | Wafer spec/thickness |
| `LOTTRACKINQTY_PCS` | NUMBER | Track-in quantity (PCS) |
| `LOTTRACKINTIME` | DATE | Track-in time |
| `LOTTRACKINEMPLOYEE` | VARCHAR2(255) | Track-in operator |

#### Key Column Notes

##### Equipment Status Columns

| Column | Type | Description | Example |
|--------|------|------|--------|
| `EQUIPMENTASSETSSTATUS` | VARCHAR2(40) | Equipment asset status | `PRD` |
| `EQUIPMENTASSETSSTATUSREASON` | VARCHAR2(40) | Status reason | `Production RUN` |
| `OBJECTCATEGORY` | VARCHAR2(40) | Category/process classification | `ASSEMBLY` |

##### Lot and Product Columns

| Column | Type | Description | Example |
|--------|------|------|--------|
| `RUNCARDLOTID` | VARCHAR2(40) | Lot ID (run card lot id) | `GA26011480-A00-006` |
| `"Package"` | VARCHAR2(40) | Package model | `DFN2510-10L` |
| `"Function"` | VARCHAR2(40) | Product function class | `TVS/ESD` |
| `TYPE` | VARCHAR2(40) | Product type | `PE1605M4AQ` |
| `BOP` | VARCHAR2(40) | BOP code | `ECA08` |
| `SPEC` | VARCHAR2(40) | Operation spec | `元件切割` |

##### Track-In and Wafer Columns

| Column | Type | Description |
|--------|------|------|
| `LOTTRACKINQTY_PCS` | NUMBER | Track-in quantity (PCS) |
| `LOTTRACKINTIME` | DATE | Track-in time |
| `LOTTRACKINEMPLOYEE` | VARCHAR2(255) | Track-in operator |
| `WAFERLOTID` | VARCHAR2(40) | Wafer lot |
| `WAFERPN` | VARCHAR2(40) | Wafer part number |
| `WAFERLOTID_PREFIX` | VARCHAR2(160) | Wafer lot prefix |
| `LFOPTIONID` | VARCHAR2(4000) | Leadframe option |
| `WIREDESCRIPTION` | VARCHAR2(4000) | Wire description |
| `WAFERMIL` | VARCHAR2(3062) | Wafer thickness/spec |
#### Query Strategies

**1. Equipment status distribution**
```sql
SELECT
    OBJECTCATEGORY,
    EQUIPMENTASSETSSTATUS,
    EQUIPMENTASSETSSTATUSREASON,
    COUNT(*) as EQUIPMENT_COUNT
FROM DW_MES_EQUIPMENTSTATUS_WIP_V
GROUP BY OBJECTCATEGORY, EQUIPMENTASSETSSTATUS, EQUIPMENTASSETSSTATUSREASON
ORDER BY OBJECTCATEGORY, EQUIPMENT_COUNT DESC;
```

**2. WIP lots per equipment (with track-in)**
```sql
SELECT
    EQUIPMENTID,
    RUNCARDLOTID,
    "Package" as PACKAGE,
    "Function" as FUNCTION,
    TYPE,
    BOP,
    SPEC,
    LOTTRACKINQTY_PCS,
    LOTTRACKINTIME
FROM DW_MES_EQUIPMENTSTATUS_WIP_V
WHERE RUNCARDLOTID IS NOT NULL
ORDER BY LOTTRACKINTIME DESC;
```

**3. Maintenance job list**
```sql
SELECT
    EQUIPMENTID,
    JOBORDER,
    JOBMODEL,
    JOBSTAGE,
    JOBSTATUS,
    CREATEDATE,
    SYMPTOMCODE,
    CAUSECODE,
    REPAIRCODE
FROM DW_MES_EQUIPMENTSTATUS_WIP_V
WHERE JOBORDER IS NOT NULL
ORDER BY CREATEDATE DESC;
```

**4. Wafer/material distribution**
```sql
SELECT
    WAFERPN,
    WAFERLOTID_PREFIX,
    COUNT(*) as LOT_COUNT
FROM DW_MES_EQUIPMENTSTATUS_WIP_V
WHERE WAFERPN IS NOT NULL
GROUP BY WAFERPN, WAFERLOTID_PREFIX
ORDER BY LOT_COUNT DESC;
```
#### Relations to Other Tables

| Related table | Join columns | Purpose |
|--------|---------|------|
| DW_MES_LOT_V | RUNCARDLOTID ↔ LOTID | Cross-reference lot status and workcenter info |
| DW_MES_WIP | RUNCARDLOTID ↔ CONTAINERNAME | Fetch current lot state and work-order info |
| DW_MES_RESOURCE | EQUIPMENTID / RESOURCEID | Fetch equipment master/resource info |

#### Important Notes

⚠️ **Refresh cadence**: live DB-link query; use `LOTTRACKINTIME` to judge data freshness

⚠️ **Case-sensitive columns**: `"Package"` and `"Function"` are **quoted identifiers**; queries must use double quotes to preserve case

⚠️ **Nullable columns**: maintenance-job and wafer/material columns are often NULL; add predicates per use case

⚠️ **No database comments**: this view has no Oracle column comments (ALL_COL_COMMENTS is empty); refer to this document for column descriptions

---
### DW_MES_SPEC_WORKCENTER_V (Workcenter/Operation Mapping View) ⭐

**Nature**: Mapping View

**Business definition**: built from `MES_SPEC`, `MES_OPERATION`, and `MES_WORKCENTER`; maps SPEC to workcenter names plus grouping and ordering columns. Useful for unifying workcenter naming and sort order and for report grouping.

**Data source**: `MES_SPEC`, `MES_OPERATION`, `MES_WORKCENTER` (DWH local tables)

**Row count**: 230 (queried 2026-01-29)

#### Columns (9 columns)

| Column | Type | Description |
|--------|------|------|
| `SPEC` | VARCHAR2(40) | SPEC name |
| `SPECSEQUENCE` | NUMBER | SPEC order (PJ_SEQUENCE) |
| `SPEC_ORDER` | VARCHAR2(200) | Sort key (SPECSEQUENCE + '_' + SPEC) |
| `WORK_CENTER` | VARCHAR2(100) | Workcenter name |
| `WORK_CENTER_SEQUENCE` | VARCHAR2(40) | Workcenter order code (from WORKCENTER.Description) |
| `WORK_CENTER_GROUP` | VARCHAR2(100) | Workcenter group name (merged by rule, e.g. 焊接/成型/電鍍) |
| `WORKCENTERSEQUENCE_GROUP` | VARCHAR2(40) | Workcenter group order code (normalized by rule) |
| `WORKCENTERGROUP_ORDER` | VARCHAR2(200) | Group sort key (sequence + '_' + group name) |
| `WORK_CENTER_SHORT` | VARCHAR2(40) | Workcenter short name (e.g. DB/WB/Mold) |
#### Query Strategies

**1. SPEC to workcenter group**
```sql
SELECT
    SPEC,
    WORK_CENTER,
    WORK_CENTER_GROUP,
    WORK_CENTER_SHORT,
    WORKCENTERSEQUENCE_GROUP
FROM DWH.DW_MES_SPEC_WORKCENTER_V
ORDER BY WORKCENTERSEQUENCE_GROUP, SPEC;
```

**2. Join against the WIP view (to add workcenter grouping)**
```sql
SELECT
    l.LOTID,
    l.SPECNAME,
    l.WORKCENTERNAME,
    s.WORK_CENTER_GROUP,
    s.WORK_CENTER_SHORT
FROM DWH.DW_MES_LOT_V l
LEFT JOIN DWH.DW_MES_SPEC_WORKCENTER_V s
    ON l.SPECNAME = s.SPEC
ORDER BY l.WORKCENTERSEQUENCE_GROUP, l.LOTID;
```

#### Important Notes

⚠️ **Grouping rules**: `WORK_CENTER_GROUP` and `WORKCENTERSEQUENCE_GROUP` are produced by CASE rules; recheck them whenever workcenter names change

---
## Snapshot Table Analysis

### 1. DW_MES_WIP (Work-in-Process Table) ⭐⭐⭐
@@ -1196,7 +1429,7 @@ ORDER BY TXNTIMESTAMP DESC;

 #### Important Notes

-⚠️ **Very large table**: 77 million rows; always add a time predicate
+⚠️ **Very large table**: ~77.96 million rows; always add a time predicate

 ⚠️ **Relation to LOTWIPHISTORY**: joined via `WIPLOTHISTORYID`
@@ -1326,7 +1559,7 @@ ORDER BY TOTAL_OUTPUT DESC;

 #### Important Notes

-⚠️ **Very large table**: 48 million rows; a time predicate is mandatory
+⚠️ **Very large table**: ~48.65 million rows; a time predicate is mandatory

 ⚠️ **Difference from LOTWIPHISTORY**:
 - HM_LOTMOVEOUT: records MoveOut events only
@@ -2091,9 +2324,9 @@ WHERE RN > 0;

 ---

-**Document version**: v1.1
-**Last updated**: 2026-01-27
-**Changes**: added detailed analysis of the DWH.DW_PJ_LOT_V real-time WIP view (70 columns)
+**Document version**: v1.2
+**Last updated**: 2026-01-29
+**Changes**: refreshed row counts from a full DWH scan; added the DW_MES_SPEC_WORKCENTER_V workcenter mapping view and its query strategies
 **Suggested review cadence**: quarterly, or when table structures change
docs/MES_Database_Reference.md (new file, +1374)
File diff suppressed because it is too large
docs/Oracle_Authorized_Objects.md (new file, +36)
@@ -0,0 +1,36 @@
# Oracle Accessible TABLE/VIEW List (DWH)

**Generated**: 2026-01-29 13:34:22
**User**: MBU1_R
**Schema**: DWH

## Summary

- Total accessible objects: 19
- TABLE: 16
- VIEW: 3
- Sources (deduplicated object counts): DIRECT 19, PUBLIC 0, ROLE 0, SYSTEM 0

## Object List

| Object | Type | Privilege | Grant Source |
|------|------|------|----------|
| `DWH.DW_MES_CONTAINER` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_EQUIPMENTSTATUS_WIP_V` | VIEW | SELECT | DIRECT |
| `DWH.DW_MES_HM_LOTMOVEOUT` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_HOLDRELEASEHISTORY` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_JOB` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_JOBTXNHISTORY` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_LOTMATERIALSHISTORY` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_LOTREJECTHISTORY` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_LOTWIPDATAHISTORY` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_LOTWIPHISTORY` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_LOT_V` | VIEW | SELECT | DIRECT |
| `DWH.DW_MES_MAINTENANCE` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_PARTREQUESTORDER` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_PJ_COMBINEDASSYLOTS` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_RESOURCE` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_RESOURCESTATUS` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_RESOURCESTATUS_SHIFT` | TABLE | SELECT | DIRECT |
| `DWH.DW_MES_SPEC_WORKCENTER_V` | VIEW | SELECT | DIRECT |
| `DWH.DW_MES_WIP` | TABLE | SELECT | DIRECT |
openspec/changes/archive/2026-01-29-resource-cache/design.md (new file, +162)
@@ -0,0 +1,162 @@
## Context

Equipment master data from `DW_MES_RESOURCE` is currently queried in several places:

1. **filter_cache.py** - standalone query of `RESOURCEFAMILYNAME` for filter options
2. **resource_service.py** - JOIN with `DW_MES_RESOURCESTATUS` for live status
3. **resource_history_service.py** - JOIN with `DW_MES_RESOURCESTATUS_SHIFT` for historical performance

Every page open waits on an Oracle query, and multiple features re-fetch the same data.

The system already has a WIP cache (`cache_updater.py`) that stores `DW_PJ_LOT_V` data in Redis with `SYS_DATE` as the version marker. The Resource cache can reuse a similar architecture.
## Goals / Non-Goals

**Goals:**
- Cache the filtered `DW_MES_RESOURCE` table (all 78 columns) in Redis
- Sync automatically every 4 hours, using `MAX(LASTCHANGEDATE)` as the version marker
- Provide a single API for modules to fetch equipment data and filter options
- Integrate with the existing `CacheUpdater` background task
- Fall back to Oracle automatically when Redis is unavailable

**Non-Goals:**
- No changes to SQL JOIN logic (JOINs still run in Oracle)
- No caching of `DW_MES_RESOURCESTATUS` or `DW_MES_RESOURCESTATUS_SHIFT` (they change too often)
- No real-time sync (WebSocket / triggers)
## Decisions

### Decision 1: New standalone module `resource_cache.py`

**Choice**: create `src/mes_dashboard/services/resource_cache.py`

**Alternatives**:
- A) Extend the existing `filter_cache.py` - but that module would then mix WIP and Resource concerns
- B) Merge into `wip_cache.py` - but the data sources and sync cadences differ

**Rationale**:
- Single responsibility: handles only the Resource equipment master
- Testable: an isolated module can be tested on its own
- Extensible: more Resource features can be added later

---
### Decision 2: Cache the full table (78 columns)

**Choice**: `SELECT * FROM DW_MES_RESOURCE WHERE <filters>`

**Alternatives**:
- A) Cache only the common columns (~15) - roughly 2-3 MB
- B) Cache the full table (78 columns) - roughly 10-18 MB

**Rationale**:
- Avoids touching the cache logic whenever a new column is needed
- 10-18 MB is still light for Redis (the WIP cache can run to tens of MB)
- Load once, use everywhere

---
### Decision 3: 4-hour sync interval

**Choice**: `RESOURCE_SYNC_INTERVAL = 14400` seconds (4 hours)

**Alternatives**:
- A) 1 hour - fresher, but more Oracle load
- B) 24 hours - lowest load, but too stale
- C) Event-driven - needs an extra trigger mechanism

**Rationale**:
- The equipment master changes rarely (new machines, scrapping, etc.)
- A 4-hour lag is acceptable for reporting
- Balances freshness against resource cost

---
### Decision 4: Use `MAX(LASTCHANGEDATE)` as the version marker

**Choice**: `SELECT MAX(LASTCHANGEDATE) FROM DW_MES_RESOURCE WHERE <filters>`

**Alternatives**:
- A) Full resync every cycle - simple but wasteful
- B) Compare COUNT(*) - cannot detect updates to existing rows
- C) CHECKSUM - not natively supported by Oracle

**Rationale**:
- Detects both inserts and updates
- Cheap to query (indexed column)
- Mirrors the WIP cache's `SYS_DATE` mechanism
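As a sketch of the comparison this decision implies (the `needs_sync` name is illustrative; the real module wires this to the Oracle and Redis accessors). Since `MAX(LASTCHANGEDATE)` is serialised as a `YYYY-MM-DD HH:MM:SS` string, plain string comparison follows chronological order:

```python
from typing import Optional

def needs_sync(oracle_version: Optional[str],
               cached_version: Optional[str]) -> bool:
    # Sync when the cache is empty or Oracle reports a newer
    # MAX(LASTCHANGEDATE). Versions are 'YYYY-MM-DD HH:MM:SS' strings,
    # so lexicographic comparison matches chronological order.
    if oracle_version is None:
        return False   # Oracle unreachable: keep serving the cache
    if cached_version is None:
        return True    # cache never loaded
    return oracle_version > cached_version
```

Treating an unreachable Oracle as "no sync needed" keeps the cache serving stale-but-usable data, consistent with the fallback goal above.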
---
### Decision 5: Integrate with the existing CacheUpdater

**Choice**: extend `cache_updater.py` with resource sync logic

**Alternatives**:
- A) A separate background task - more to maintain
- B) APScheduler - introduces a new dependency

**Rationale**:
- Reuses the existing architecture
- Keeps cache management in one place
- Avoids duplicated code

---
### Decision 6: Python-side filtering vs. Redis-side queries

**Choice**: store the data as one JSON array and filter in Python

**Alternatives**:
- A) One key per row (`resource:{id}`) - multiple Redis round-trips
- B) Redis Search - requires an extra module
- C) JSON array + Python filtering - single read, in-memory computation

**Rationale**:
- Small dataset (~5,000 rows); loading it all into memory is cheap
- One Redis call fetches everything
- Filtering logic stays flexible in Python
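A minimal sketch of the Python-side filtering this decision implies (function name and column choices are illustrative, not the module's actual API):

```python
from typing import Dict, List, Optional

def filter_resources(
    rows: List[Dict],
    workcenters: Optional[List[str]] = None,
    families: Optional[List[str]] = None,
) -> List[Dict]:
    # In-memory filtering over the cached JSON array:
    # each criterion is skipped when not supplied.
    result = []
    for row in rows:
        if workcenters and row.get("WORKCENTERNAME") not in workcenters:
            continue
        if families and row.get("RESOURCEFAMILYNAME") not in families:
            continue
        result.append(row)
    return result
```

At ~5,000 rows a linear scan like this completes in well under a millisecond, which is why no Redis-side indexing is needed.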
## Risks / Trade-offs

| Risk | Mitigation |
|------|----------|
| **Redis unavailable** → filters fail to load | Fall back to direct Oracle queries; log a warning |
| **New equipment invisible for up to 4 hours** → user confusion | Provide a manual refresh API; document the lag |
| **Higher memory use** → Redis contention | Monitor memory usage; compress data if needed |
| **Large NOTES/RESOURCECOMMENTS columns** → wasted space | Actual fill rate is low (~40%); acceptable |
| **JSON parsing cost** → response latency | Use orjson; estimated <50 ms |
## Migration Plan

### Phase 1: Add the module (no impact on existing features)
1. Create the `resource_cache.py` module
2. Implement the Redis sync logic
3. Integrate with `CacheUpdater`
4. Add unit tests

### Phase 2: Integrate equipment history performance
1. Modify `resource_history_service.get_filter_options()`
2. Verify the model filter

### Phase 3: Integrate the machine status report
1. Modify `resource_service.query_resource_filter_options()`
2. Verify all filters

### Phase 4: Cleanup
1. Remove `filter_cache._load_resource_families()`
2. Update the health check endpoint

### Rollback Strategy
- Setting `RESOURCE_CACHE_ENABLED=false` reverts immediately to direct Oracle queries
- No data migration, so rollback is risk-free
## Open Questions

1. **Do we need an ID index?**
   - The current design loads the full table and filters in Python
   - A Redis Hash index can be added if high-frequency ID lookups emerge

2. **Do we need a forced-refresh API?**
   - Only the internal `refresh_cache(force=True)` exists today
   - A `/api/admin/cache/resource/refresh` endpoint could be added
openspec/changes/archive/2026-01-29-resource-cache/proposal.md (new file, +350)
@@ -0,0 +1,350 @@
# Proposal: Cache the Full DW_MES_RESOURCE Table in Redis

## Background

Several features query `DW_MES_RESOURCE` for equipment master data, mainly for:
1. Filter dropdown options (model, workcenter, department, etc.)
2. Equipment-info JOIN queries

Current problems:
- Every filter load hits Oracle
- Multiple features re-fetch the same data
- Users wait longer than necessary

## Goals

Cache `DW_MES_RESOURCE` (after the global filters) **in full, all rows and all columns**, in Redis, to achieve:
1. **Fast filter loading** - no waiting on an Oracle query when a page opens
2. **Central management** - one data source, so every feature sees the same equipment list
3. **Lower Oracle load** - fewer duplicate queries
4. **Room to grow** - with all columns cached, new features can use the data directly
## Data Specification

### Filter Conditions (same as today)

```sql
WHERE ((OBJECTCATEGORY = 'ASSEMBLY' AND OBJECTTYPE = 'ASSEMBLY')
    OR (OBJECTCATEGORY = 'WAFERSORT' AND OBJECTTYPE = 'WAFERSORT'))
  AND (LOCATIONNAME IS NULL OR LOCATIONNAME NOT IN (
      'ATEC', 'F區', 'F區焊接站', '報廢', '實驗室',
      '山東', '成型站_F區', '焊接F區', '無錫', '熒茂'))
  AND (PJ_ASSETSSTATUS IS NULL OR PJ_ASSETSSTATUS NOT IN ('Disapproved'))
```

### Cached Columns (all 78)

```sql
SELECT * FROM DW_MES_RESOURCE WHERE <filter conditions>
```
Full column list:

| Category | Columns |
|------|------|
| **Identity** | RESOURCEID, RESOURCENAME, DESCRIPTION |
| **Classification** | OBJECTCATEGORY, OBJECTTYPE, EQUIPMENTTYPE, RESOURCEFAMILYID, RESOURCEFAMILYNAME |
| **Organization** | FACTORYID, LOCATIONID, LOCATIONNAME, WORKCENTERNAME, PJ_WORKCENTERID, PJ_WORKCENTER_ID, PJ_DEPARTMENT |
| **Vendor** | VENDORID, VENDORNAME, VENDORMODEL, VENDORSERIALNUMBER, PJ_ERPVENDORID |
| **Status flags** | PJ_ASSETSSTATUS, PJ_ISPRODUCTION, PJ_ISKEY, PJ_ISMONITOR, PJ_ISAUEQUIPMENT |
| **Automation** | AUTOMATIONPLANID, AUTOMATIONPLANNAME, PJ_AUTOMATIONLEVEL, PJ_AUEQUIPMENTGROUPID |
| **Process** | RECIPEID, BOMID, BOMBASEID, PACKAGEGROUPID, TOOLPLANID |
| **SPC** | SPCSETUPID, PJ_SPCSETUP, USESPCMATRIX, PJ_VERIFYSPCRESULT |
| **Checks** | PJ_CHECKBYHOUR, PJ_CHECKBYIDLETIME, PJ_CHECKBYLOT, PJ_CHECKBYPRODUCT, PJ_CHECKBYTYPE, PJ_CHECKBYWORKORDER |
| **Product** | PJ_DATECODE1, PJ_DATECODE2, PJ_FINISHEDPRODUCT, PJ_WAFERPRODUCT, PJ_PROCESSSPEC, PJ_WORKORDER |
| **Capacity** | LOTCOUNT, MAXLOTS, MAXUNITS, MULTILOTSFLAG |
| **Other** | CONTAINERID, DOCUMENTSETID, MACHINEGROUPID, MAINTENANCECLASSID, NOTES, RESOURCECOMMENTS, PJ_OWNER, PJ_EMPLOYEE, PJ_LOTID, PJ_CONTROLLENGTH |
| **Relations** | PARENTRESOURCEID, PRODUCTIONSTATUSID, STATUSMODELID, SETUPACCESSID, PJ_SETUPACCESSID, PARAMLISTID, SUBEQUIPMENTLOGICALID, TRAININGREQGROUPID, UOMID, WIPMSGDEFMGRID |
| **Audit** | CREATIONDATE, CREATIONUSERNAME, LASTCHANGEDATE, USERID |
### Size Estimate

| Item | Value |
|------|------|
| Raw row count | 90,620 |
| Estimated rows after filtering | ~3,000 - 5,000 |
| Columns | 78 |
| Max row size | ~6,045 bytes |
| Average row size (as JSON) | ~3,600 bytes |
| **Total cache size** | **~10 - 18 MB** |

### Sync Strategy

| Item | Setting |
|------|------|
| Sync interval | every 4 hours |
| Initial load | at application startup |
| Version marker | `MAX(LASTCHANGEDATE)` |
| Failure policy | fall back to Oracle when the query fails |
### Redis Key Structure

```
{prefix}:resource:data          - full data (JSON array)
{prefix}:resource:meta:version  - MAX(LASTCHANGEDATE)
{prefix}:resource:meta:updated  - sync time (ISO 8601)
{prefix}:resource:meta:count    - record count
```
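A small sketch of deriving these key names (the `resource_keys` helper is illustrative, not part of the proposal). With redis-py, the four writes would typically go through a single `pipeline()` so readers never observe data without matching metadata:

```python
from typing import Dict

def resource_keys(prefix: str) -> Dict[str, str]:
    # Key layout from the structure above; all four keys are
    # written together in one Redis pipeline during a sync.
    return {
        "data": f"{prefix}:resource:data",
        "version": f"{prefix}:resource:meta:version",
        "updated": f"{prefix}:resource:meta:updated",
        "count": f"{prefix}:resource:meta:count",
    }
```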
---
## Impact Inventory

### 1. Equipment History Performance (Resource History)

**Files**: `resource_history_service.py`, `resource_history_routes.py`

| Feature | Today | After |
|------|------|--------|
| Model filter (families) | `filter_cache.get_resource_families()` queries Oracle | Read from Redis |
| Workcenter filter (workcenter_groups) | `filter_cache.get_workcenter_groups()` queries DW_PJ_LOT_V | Unchanged (different source) |
| `/api/resource/history/options` | Calls filter_cache | Use the new resource_cache |

**Frontend**: `resource_history.html`
- `familiesDropdown` - multi-select model dropdown

### 2. Machine Status Report (Resource Status)

**Files**: `resource_service.py`, `resource_routes.py`

| Feature | Today | After |
|------|------|------|
| `query_resource_filter_options()` | Direct Oracle query + JOIN with status | Read dimension columns from Redis |
| `/resource/filter_options` | Oracle query, result cached 60 s | Use the Redis cache |

**Frontend**: `resource.html`
- Workcenter filter (workcenter)
- Status filter (status) - **unchanged**, needs live data
- Model filter (family)
- Department filter (department)

### 3. Filter Cache Module

**File**: `filter_cache.py`

| Feature | Today | After |
|------|------|--------|
| `get_resource_families()` | Standalone DW_MES_RESOURCE query | Deprecated; use resource_cache |
| `_load_resource_families()` | Internal loader | Deprecated |
| `get_workcenter_groups()` | Queries DW_PJ_LOT_V | Unchanged |
| `get_workcenter_mapping()` | Queries DW_PJ_LOT_V | Unchanged |

### 4. Dashboard Service

**File**: `dashboard_service.py`

| Feature | Today | After |
|------|------|--------|
| UDT/SDT drill-down | JOIN with DW_MES_RESOURCE | Consider a Python-side JOIN instead |

---
## New Module Design

### resource_cache.py (new)

```python
"""Resource Cache - full-table cache module for DW_MES_RESOURCE

Caches the globally filtered equipment master data in Redis, all columns included.
"""

from typing import Dict, List, Optional

import pandas as pd

# ============================================================
# Primary query API
# ============================================================

def get_all_resources() -> List[Dict]:
    """Return all cached equipment rows (all columns)."""

def get_resource_by_id(resource_id: str) -> Optional[Dict]:
    """Return a single equipment row by RESOURCEID."""

def get_resources_by_ids(resource_ids: List[str]) -> List[Dict]:
    """Return equipment rows for a list of RESOURCEIDs."""

def get_resources_by_filter(
    workcenters: List[str] = None,
    families: List[str] = None,
    departments: List[str] = None,
    locations: List[str] = None,
    is_production: bool = None,
    is_key: bool = None,
    is_monitor: bool = None,
) -> List[Dict]:
    """Filter equipment rows by criteria (filtering happens in Python)."""

# ============================================================
# Filter-option API (replaces the existing filter_cache functions)
# ============================================================

def get_distinct_values(column: str) -> List[str]:
    """Return the sorted distinct values of a column.

    Common columns:
    - RESOURCEFAMILYNAME (model)
    - WORKCENTERNAME (workcenter)
    - PJ_DEPARTMENT (department)
    - LOCATIONNAME (location)
    - VENDORNAME (vendor)
    - PJ_ASSETSSTATUS (asset status)
    """

def get_resource_families() -> List[str]:
    """Return the model list (convenience wrapper)."""
    return get_distinct_values('RESOURCEFAMILYNAME')

def get_workcenters() -> List[str]:
    """Return the workcenter list (convenience wrapper)."""
    return get_distinct_values('WORKCENTERNAME')

def get_departments() -> List[str]:
    """Return the department list (convenience wrapper)."""
    return get_distinct_values('PJ_DEPARTMENT')

# ============================================================
# Cache management API
# ============================================================

def get_cache_status() -> Dict:
    """Return cache status.

    Returns:
        {
            'enabled': True,
            'loaded': True,
            'count': 3500,
            'version': '2026-01-29 10:30:00',  # MAX(LASTCHANGEDATE)
            'updated_at': '2026-01-29 14:00:00',
            'size_bytes': 12582912,
        }
    """

def refresh_cache(force: bool = False) -> bool:
    """Refresh the cache manually.

    Args:
        force: refresh unconditionally, skipping the version check
    """

def init_cache():
    """Initialise the cache (called at application startup)."""

# ============================================================
# Internal helpers
# ============================================================

def _load_from_oracle() -> pd.DataFrame:
    """Load the full table from Oracle (with the global filters applied)."""

def _sync_to_redis(df: pd.DataFrame) -> bool:
    """Sync to Redis (one pipeline for atomicity)."""

def _get_version_from_oracle() -> Optional[str]:
    """Return the Oracle data version (MAX(LASTCHANGEDATE))."""

def _get_version_from_redis() -> Optional[str]:
    """Return the cached version from Redis."""
```
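As one way `get_distinct_values` could be realised over the cached rows (illustrative sketch; the real module would first read and parse the JSON array from Redis):

```python
from typing import Dict, List

def distinct_values(rows: List[Dict], column: str) -> List[str]:
    # Collect non-empty values of one column, dedupe, and sort,
    # mirroring what get_distinct_values() promises.
    values = {row[column] for row in rows if row.get(column)}
    return sorted(values)
```

Dropping NULL/empty values keeps placeholder rows out of the dropdown options.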
### Redis Key Structure

```
{prefix}:resource:data          - full data (JSON array, all columns)
{prefix}:resource:index:id      - ID index (Hash: RESOURCEID -> row_index)
{prefix}:resource:meta:version  - data version, MAX(LASTCHANGEDATE)
{prefix}:resource:meta:updated  - sync time (ISO 8601)
{prefix}:resource:meta:count    - record count
```
### 背景同步任務
|
||||
|
||||
整合至現有 `CacheUpdater`,新增 resource 同步邏輯:
|
||||
|
||||
```python
|
||||
class CacheUpdater:
|
||||
def __init__(self):
|
||||
self.wip_check_interval = 600 # 10 分鐘 (現有 WIP)
|
||||
self.resource_sync_interval = 14400 # 4 小時 (新增 Resource)
|
||||
self._last_resource_sync = None
|
||||
|
||||
def _check_resource_update(self):
|
||||
"""檢查並同步 Resource 快取
|
||||
|
||||
1. 檢查距上次同步是否超過 4 小時
|
||||
2. 比對 Oracle MAX(LASTCHANGEDATE) 與 Redis 版本
|
||||
3. 若有差異則全表同步
|
||||
"""
|
||||
```

### Environment Variables

| Variable | Default | Description |
|------|--------|------|
| `RESOURCE_CACHE_ENABLED` | `true` | Enable the Resource cache |
| `RESOURCE_SYNC_INTERVAL` | `14400` | Sync interval in seconds (default: 4 hours) |

---

## Migration Plan

### Phase 1: Add the resource_cache module
- Create `resource_cache.py`
- Implement the Redis sync logic
- Add the background sync task
- Add a health check item

### Phase 2: Integrate equipment history performance
- Modify `resource_history_service.get_filter_options()`
- Update the `/api/resource/history/options` endpoint
- Test the model (family) filter

### Phase 3: Integrate the machine status report
- Modify `query_resource_filter_options()`
- Update the `/resource/filter_options` endpoint
- Test all filters

### Phase 4: Cleanup
- Remove `filter_cache._load_resource_families()`
- Remove `filter_cache.get_resource_families()`
- Update the documentation

---

## Testing Focus

1. **Cache loading**
   - Loads successfully on application startup
   - Falls back to Oracle when Redis is unavailable

2. **Filter functionality**
   - Equipment history performance: model dropdown displays correctly
   - Machine status report: all filters display correctly

3. **Sync mechanism**
   - Automatic sync every 4 hours
   - Manual forced sync

4. **Performance**
   - Filter load time < 100 ms
   - Memory usage within expectations

---

## Risk Assessment

| Risk | Impact | Mitigation |
|------|------|----------|
| Redis unavailable | Filters cannot load | Fall back to direct Oracle queries |
| Data inconsistency | New equipment may not appear for up to 4 hours | Manual sync trigger; API for forced refresh |
| Memory usage | Cache is roughly 10-18 MB | Still lightweight compared with the WIP cache (potentially tens of MB) |
| Large-column waste | NOTES/RESOURCECOMMENTS up to 2000 chars | Low actual usage; average fill rate ~40% |

---

## Expected Benefits

1. **User experience** - filter load time drops from ~500 ms to <100 ms
2. **Oracle load** - roughly 80% fewer equipment master-data queries
3. **Maintainability** - a single, centralized source for equipment data
4. **Extensibility** - supports additional query dimensions in the future

@@ -0,0 +1,180 @@
## ADDED Requirements

### Requirement: Resource Cache Data Storage

The system SHALL store the full `DW_MES_RESOURCE` table (after applying the global filters) in Redis as JSON.

#### Scenario: Data stored with correct keys
- **WHEN** a cache sync completes
- **THEN** Redis SHALL contain the following keys:
  - `{prefix}:resource:data` - full table data (JSON array, all 78 columns)
  - `{prefix}:resource:meta:version` - `MAX(LASTCHANGEDATE)` of the Oracle data
  - `{prefix}:resource:meta:updated` - cache update time (ISO 8601 format)
  - `{prefix}:resource:meta:count` - record count

#### Scenario: Global filters applied
- **WHEN** loading data from Oracle
- **THEN** the system SHALL apply the following filter conditions:
  - Equipment type: `(OBJECTCATEGORY = 'ASSEMBLY' AND OBJECTTYPE = 'ASSEMBLY') OR (OBJECTCATEGORY = 'WAFERSORT' AND OBJECTTYPE = 'WAFERSORT')`
  - Excluded locations: `LOCATIONNAME NOT IN ('ATEC', 'F區', 'F區焊接站', '報廢', '實驗室', '山東', '成型站_F區', '焊接F區', '無錫', '熒茂')`
  - Excluded asset statuses: `PJ_ASSETSSTATUS NOT IN ('Disapproved')`

#### Scenario: Atomic update with pipeline
- **WHEN** a cache sync runs
- **THEN** the system SHALL use a Redis pipeline so that all keys are updated atomically

---
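A minimal sketch of the atomic write described above, assuming a redis-py-style client (`pipeline()`, `set()`, `execute()`); the key names follow the spec, everything else is illustrative:

```python
import json
from typing import Any, Dict, List


def sync_to_redis(client: Any, prefix: str, rows: List[Dict], version: str) -> None:
    """Write all resource-cache keys in one transactional pipeline.

    With transaction=True the queued SETs are wrapped in MULTI/EXEC,
    so readers never observe a half-updated cache.
    """
    pipe = client.pipeline(transaction=True)
    pipe.set(f"{prefix}:resource:data", json.dumps(rows, ensure_ascii=False))
    pipe.set(f"{prefix}:resource:meta:version", version)
    pipe.set(f"{prefix}:resource:meta:count", len(rows))
    pipe.execute()  # all SETs apply together, or not at all
```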

### Requirement: Resource Cache Background Sync

The system SHALL provide a background task that periodically syncs `DW_MES_RESOURCE` into the Redis cache.

#### Scenario: Periodic sync at configured interval
- **WHEN** the application has started
- **THEN** the background task SHALL check whether a sync is needed every `RESOURCE_SYNC_INTERVAL` seconds (default 14400 seconds = 4 hours)

#### Scenario: Version check triggers sync
- **WHEN** the background task runs and Oracle's `MAX(LASTCHANGEDATE)` differs from the version stored in Redis
- **THEN** the system SHALL perform a full-table sync
- **AND** update `{prefix}:resource:meta:version` to the new version
- **AND** update `{prefix}:resource:meta:updated` to the current time

#### Scenario: Version unchanged skips sync
- **WHEN** the background task runs and Oracle's `MAX(LASTCHANGEDATE)` equals the version stored in Redis
- **THEN** the system SHALL skip the sync
- **AND** log at debug level

#### Scenario: Initial cache load on startup
- **WHEN** the application starts and Redis contains no resource cache data
- **THEN** the system SHALL immediately perform one cache sync

#### Scenario: Force refresh ignores version check
- **WHEN** `refresh_cache(force=True)` is called
- **THEN** the system SHALL perform a full-table sync regardless of whether the versions match

---
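The sync rules in the scenarios above reduce to a small decision function; a sketch (the function name and signature are illustrative, not part of the module's API):

```python
from typing import Optional


def should_sync(oracle_version: Optional[str],
                redis_version: Optional[str],
                force: bool = False) -> bool:
    """Decide whether a full-table sync is needed.

    force=True always syncs; a missing Redis version means the cache is
    empty and triggers the initial load; otherwise sync only when the
    Oracle MAX(LASTCHANGEDATE) differs from the cached version.
    """
    if force:
        return True
    if redis_version is None:
        return True  # initial cache load on startup
    return oracle_version != redis_version
```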

### Requirement: Resource Cache Query API

The system SHALL provide an API for querying equipment data from the Redis cache.

#### Scenario: Get all resources
- **WHEN** `get_all_resources()` is called
- **THEN** the system SHALL return all cached equipment records (List[Dict], all 78 columns)

#### Scenario: Get resource by ID
- **WHEN** `get_resource_by_id(resource_id)` is called
- **THEN** the system SHALL return the matching equipment record (Dict)
- **AND** return `None` if the ID does not exist

#### Scenario: Get resources by multiple IDs
- **WHEN** `get_resources_by_ids(resource_ids)` is called
- **THEN** the system SHALL return all matching equipment records (List[Dict])
- **AND** IDs that do not exist SHALL NOT appear in the result

#### Scenario: Get resources by filter
- **WHEN** `get_resources_by_filter(workcenters=['焊接_DB'], is_production=True)` is called
- **THEN** the system SHALL filter the cached data on the Python side
- **AND** return the equipment records matching all conditions

---
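Because the whole table lives in the cache, filtering happens in Python rather than in SQL; a sketch of that idea (the `ISPRODUCTION` column name is an assumption for illustration, not taken from the schema):

```python
from typing import Dict, List, Optional


def filter_resources(rows: List[Dict],
                     workcenters: Optional[List[str]] = None,
                     is_production: Optional[bool] = None) -> List[Dict]:
    """Return rows matching all given conditions; None means 'no constraint'."""
    result = []
    for row in rows:
        if workcenters is not None and row.get('WORKCENTERNAME') not in workcenters:
            continue
        # NOTE: 'ISPRODUCTION' is a hypothetical flag column used for illustration.
        if is_production is not None and bool(row.get('ISPRODUCTION')) != is_production:
            continue
        result.append(row)
    return result
```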

### Requirement: Resource Cache Distinct Values API

The system SHALL provide an API that returns the distinct values of equipment columns for use by the filters.

#### Scenario: Get distinct values for column
- **WHEN** `get_distinct_values('RESOURCEFAMILYNAME')` is called
- **THEN** the system SHALL return the sorted list of distinct values for that column
- **AND** automatically filter out `None` and empty strings

#### Scenario: Convenience methods for common columns
- **WHEN** `get_resource_families()` is called
- **THEN** the system SHALL return the distinct values of the `RESOURCEFAMILYNAME` column
- **AND** `get_workcenters()` SHALL return the distinct `WORKCENTERNAME` values
- **AND** `get_departments()` SHALL return the distinct `PJ_DEPARTMENT` values

---
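The distinct-value behavior above (sorted, with `None` and empty strings dropped) is a short pass over the cached rows; a sketch:

```python
from typing import Dict, List


def distinct_values(rows: List[Dict], column: str) -> List[str]:
    """Sorted unique values of `column`, excluding None and empty strings."""
    values = {row.get(column) for row in rows}
    return sorted(v for v in values if v not in (None, ''))
```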

### Requirement: Resource Cache Status API

The system SHALL provide an API for querying the cache status.

#### Scenario: Get cache status
- **WHEN** `get_cache_status()` is called
- **THEN** the system SHALL return a Dict with the following fields:
  - `enabled`: whether the cache is enabled
  - `loaded`: whether the cache has been loaded
  - `count`: number of cached records
  - `version`: data version (MAX(LASTCHANGEDATE))
  - `updated_at`: last sync time

#### Scenario: Status when cache not loaded
- **WHEN** `get_cache_status()` is called and the cache has not been loaded
- **THEN** `loaded` SHALL be `false`
- **AND** `count` SHALL be `0`

---

### Requirement: Resource Cache Fallback

When Redis is unavailable, the system SHALL automatically degrade to querying Oracle directly.

#### Scenario: Redis unavailable triggers fallback
- **WHEN** the Redis connection fails or times out
- **THEN** the system SHALL query Oracle `DW_MES_RESOURCE` directly
- **AND** log a warning

#### Scenario: Cache empty triggers fallback
- **WHEN** Redis is available but `{prefix}:resource:data` does not exist or is empty
- **THEN** the system SHALL query Oracle `DW_MES_RESOURCE` directly

#### Scenario: RESOURCE_CACHE_ENABLED=false disables cache
- **WHEN** the environment variable `RESOURCE_CACHE_ENABLED` is set to `false`
- **THEN** the system SHALL skip Redis entirely and query Oracle directly
- **AND** the background sync task SHALL NOT start

---
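The three fallback scenarios combine into one access path; a sketch with injected callables (the function and parameter names are illustrative, not the module's real signatures):

```python
import logging
from typing import Callable, List, Optional

logger = logging.getLogger(__name__)


def load_resources(cache_enabled: bool,
                   read_cache: Callable[[], Optional[List[dict]]],
                   query_oracle: Callable[[], List[dict]]) -> List[dict]:
    """Prefer the Redis cache; fall back to Oracle when disabled, failing, or empty."""
    if not cache_enabled:
        return query_oracle()  # RESOURCE_CACHE_ENABLED=false skips Redis entirely
    try:
        rows = read_cache()
    except Exception as exc:  # connection failure or timeout
        logger.warning("Redis unavailable, falling back to Oracle: %s", exc)
        rows = None
    return rows if rows else query_oracle()  # empty cache also falls back
```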

### Requirement: Resource Cache Configuration

The system SHALL support configuring cache behavior via environment variables.

#### Scenario: Custom sync interval
- **WHEN** the environment variable `RESOURCE_SYNC_INTERVAL` is set to `7200`
- **THEN** the background task SHALL run every 7200 seconds (2 hours)

#### Scenario: Default configuration
- **WHEN** the environment variables are not set
- **THEN** the system SHALL use the defaults:
  - `RESOURCE_CACHE_ENABLED`: `true`
  - `RESOURCE_SYNC_INTERVAL`: `14400` (4 hours)

#### Scenario: Key prefix from environment
- **WHEN** the environment variable `REDIS_KEY_PREFIX` is set to `my_app`
- **THEN** all resource cache keys SHALL use the `my_app:resource:*` prefix

---

### Requirement: Health Check Integration

The health check SHALL include the Resource cache status.

#### Scenario: Resource cache status in health check
- **WHEN** `GET /health` is called and the resource cache is available
- **THEN** the response body SHALL contain a `resource_cache` block:
```json
{
  "resource_cache": {
    "enabled": true,
    "loaded": true,
    "count": 3500,
    "version": "2026-01-29 10:30:00",
    "updated_at": "2026-01-29 14:00:00"
  }
}
```

#### Scenario: Resource cache not loaded warning
- **WHEN** `GET /health` is called and the resource cache is enabled but not loaded
- **THEN** the `warnings` field of the response body SHALL contain "Resource cache not loaded"
75
openspec/changes/archive/2026-01-29-resource-cache/tasks.md
Normal file
@@ -0,0 +1,75 @@
|
||||
## 1. 新增 Resource Cache 模組
|
||||
|
||||
- [x] 1.1 建立 `src/mes_dashboard/services/resource_cache.py` 模組骨架
|
||||
- [x] 1.2 實作 `_load_from_oracle()` - 從 Oracle 載入全表資料(套用全域篩選)
|
||||
- [x] 1.3 實作 `_sync_to_redis()` - 使用 pipeline 原子寫入 Redis
|
||||
- [x] 1.4 實作 `_get_version_from_oracle()` - 查詢 `MAX(LASTCHANGEDATE)`
|
||||
- [x] 1.5 實作 `_get_version_from_redis()` - 讀取 Redis 版本
|
||||
- [x] 1.6 實作 `refresh_cache(force)` - 版本比對與同步邏輯
|
||||
- [x] 1.7 實作 `init_cache()` - 初始化載入
|
||||
|
||||
## 2. 實作查詢 API
|
||||
|
||||
- [x] 2.1 實作 `get_all_resources()` - 取得所有快取資料
|
||||
- [x] 2.2 實作 `get_resource_by_id()` - 依 ID 取得單筆
|
||||
- [x] 2.3 實作 `get_resources_by_ids()` - 批次取得
|
||||
- [x] 2.4 實作 `get_resources_by_filter()` - Python 端篩選
|
||||
|
||||
## 3. 實作篩選器選項 API
|
||||
|
||||
- [x] 3.1 實作 `get_distinct_values(column)` - 取得欄位唯一值
|
||||
- [x] 3.2 實作 `get_resource_families()` - 型號清單便捷方法
|
||||
- [x] 3.3 實作 `get_workcenters()` - 站點清單便捷方法
|
||||
- [x] 3.4 實作 `get_departments()` - 部門清單便捷方法
|
||||
|
||||
## 4. 實作快取狀態 API
|
||||
|
||||
- [x] 4.1 實作 `get_cache_status()` - 回傳快取狀態資訊
|
||||
- [x] 4.2 實作 Oracle fallback 邏輯(Redis 不可用時)
|
||||
|
||||
## 5. 整合背景同步任務
|
||||
|
||||
- [x] 5.1 修改 `cache_updater.py` 加入 resource 同步間隔配置
|
||||
- [x] 5.2 實作 `_check_resource_update()` 方法
|
||||
- [x] 5.3 在主迴圈加入 resource 同步檢查
|
||||
- [x] 5.4 啟動時觸發初始同步
|
||||
|
||||
## 6. 環境變數配置
|
||||
|
||||
- [x] 6.1 新增 `RESOURCE_CACHE_ENABLED` 環境變數支援
|
||||
- [x] 6.2 新增 `RESOURCE_SYNC_INTERVAL` 環境變數支援
|
||||
- [x] 6.3 更新 `.env.example` 範例
|
||||
|
||||
## 7. 整合設備歷史績效
|
||||
|
||||
- [x] 7.1 修改 `resource_history_service.get_filter_options()` 使用 resource_cache
|
||||
- [x] 7.2 驗證 `/api/resource/history/options` 端點正常運作 (18 workcenter_groups, 152 families)
|
||||
- [x] 7.3 驗證前端 `familiesDropdown` 型號篩選器 (用戶確認可用)
|
||||
|
||||
## 8. 整合機台狀態報表
|
||||
|
||||
- [x] 8.1 修改 `resource_service.query_resource_filter_options()` 使用 resource_cache
|
||||
- [x] 8.2 驗證 `/resource/filter_options` 端點正常運作 (18 workcenters, 152 families, 6 statuses)
|
||||
- [x] 8.3 驗證前端所有篩選器(站點、型號、部門)(用戶確認可用)
|
||||
|
||||
## 9. 清理舊程式碼
|
||||
|
||||
- [x] 9.1 移除 `filter_cache.get_resource_families()` 函數
|
||||
- [x] 9.2 移除 `filter_cache._load_resource_families()` 函數
|
||||
- [x] 9.3 更新相關 import 語句
|
||||
|
||||
## 10. Health Check 整合
|
||||
|
||||
- [x] 10.1 修改 `/health` 端點加入 `resource_cache` 狀態
|
||||
- [x] 10.2 快取未載入時加入 warning 訊息
|
||||
- [x] 10.3 更新前端 health popup 顯示 resource cache 狀態(可選)
|
||||
|
||||
## 11. 測試
|
||||
|
||||
- [x] 11.1 新增 `tests/test_resource_cache.py` 單元測試 (28 tests passed)
|
||||
- [x] 11.2 測試 Redis 同步邏輯
|
||||
- [x] 11.3 測試查詢 API
|
||||
- [x] 11.4 測試 fallback 機制
|
||||
- [x] 11.5 測試環境變數配置
|
||||
- [x] 11.6 執行整合測試驗證篩選器功能 (15 tests passed)
|
||||
- [x] 11.7 新增 `tests/e2e/test_resource_cache_e2e.py` E2E 測試 (待服務重啟後驗證)
|
||||
180
openspec/specs/resource-cache/spec.md
Normal file
@@ -0,0 +1,180 @@
@@ -5,11 +5,11 @@
 TABLES_CONFIG = {
     '即時數據表 (DWH)': [
         {
-            'name': 'DWH.DW_PJ_LOT_V',
-            'display_name': 'WIP 即時批次 (DW_PJ_LOT_V)',
+            'name': 'DW_MES_LOT_V',
+            'display_name': 'WIP 即時批次 (DW_MES_LOT_V)',
             'row_count': 10000,  # dynamic; typically about 9000-12000
             'time_field': 'SYS_DATE',
-            'description': 'DWH 即時 WIP View - 每 5 分鐘更新,包含完整批次狀態、工站、設備、Hold 原因等 70 欄位'
+            'description': 'MES 即時 WIP View - 每 5 分鐘更新,包含完整批次狀態、工站、設備、Hold 原因等 70 欄位'
         }
     ],
     '現況快照表': [

@@ -84,7 +84,7 @@ def get_cached_wip_data() -> Optional[pd.DataFrame]:
     """Get cached WIP data from Redis.

     Returns:
-        DataFrame with full DW_PJ_LOT_V data, or None if cache miss.
+        DataFrame with full DW_MES_LOT_V data, or None if cache miss.
     """
     if not REDIS_ENABLED:
         return None

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Background task for updating WIP cache from Oracle to Redis."""
+"""Background task for updating WIP and Resource cache from Oracle to Redis."""

 from __future__ import annotations

@@ -7,6 +7,7 @@ import json
 import logging
 import os
 import threading
 import time
+from datetime import datetime
 from typing import Optional

@@ -27,7 +28,10 @@ logger = logging.getLogger('mes_dashboard.cache_updater')
 # ============================================================

 CACHE_CHECK_INTERVAL = int(os.getenv('CACHE_CHECK_INTERVAL', '600'))  # 10 minutes
-WIP_VIEW = "DWH.DW_PJ_LOT_V"
+WIP_VIEW = "DW_MES_LOT_V"
+
+# Resource cache sync interval (default: 4 hours)
+RESOURCE_SYNC_INTERVAL = int(os.getenv('RESOURCE_SYNC_INTERVAL', '14400'))

 # ============================================================
 # Cache Updater Class
@@ -44,9 +48,11 @@ class CacheUpdater:
             interval: Check interval in seconds (default: 600)
         """
         self.interval = interval
+        self.resource_sync_interval = RESOURCE_SYNC_INTERVAL
         self._stop_event = threading.Event()
         self._thread: Optional[threading.Thread] = None
         self._is_running = False
+        self._last_resource_sync: Optional[float] = None

     def start(self) -> None:
         """Start the background update thread."""
@@ -96,10 +102,14 @@ class CacheUpdater:
         logger.info("Performing initial cache load...")
         self._check_and_update(force=True)

+        # Initial resource cache load
+        self._check_resource_update(force=True)
+
         # Periodic updates
         while not self._stop_event.wait(self.interval):
             try:
                 self._check_and_update()
+                self._check_resource_update()
             except Exception as e:
                 logger.error(f"Cache update failed: {e}", exc_info=True)

@@ -182,7 +192,7 @@ class CacheUpdater:
         return None

     def _load_full_table(self) -> Optional[pd.DataFrame]:
-        """Load entire DW_PJ_LOT_V table from Oracle.
+        """Load entire DW_MES_LOT_V table from Oracle.

         Returns:
             DataFrame with all rows, or None if failed.
@@ -234,6 +244,43 @@ class CacheUpdater:
             logger.error(f"Failed to update Redis cache: {e}")
             return False

+    def _check_resource_update(self, force: bool = False) -> bool:
+        """Check and update resource cache if needed.
+
+        Args:
+            force: If True, update regardless of interval.
+
+        Returns:
+            True if cache was updated.
+        """
+        from mes_dashboard.services.resource_cache import (
+            refresh_cache as refresh_resource_cache,
+            RESOURCE_CACHE_ENABLED,
+        )
+
+        if not RESOURCE_CACHE_ENABLED:
+            return False
+
+        # Check if sync is needed based on interval
+        now = time.time()
+        if not force and self._last_resource_sync is not None:
+            elapsed = now - self._last_resource_sync
+            if elapsed < self.resource_sync_interval:
+                logger.debug(
+                    f"Resource sync not due yet ({elapsed:.0f}s < {self.resource_sync_interval}s)"
+                )
+                return False
+
+        # Perform sync
+        logger.info("Checking resource cache for updates...")
+        try:
+            updated = refresh_resource_cache(force=force)
+            self._last_resource_sync = now
+            return updated
+        except Exception as e:
+            logger.error(f"Resource cache update failed: {e}", exc_info=True)
+            return False


 # ============================================================
 # Global Instance

@@ -66,10 +66,10 @@ def check_redis() -> tuple[str, str | None]:


 def get_cache_status() -> dict:
-    """Get current cache status.
+    """Get current WIP cache status.

     Returns:
-        Dict with cache status information.
+        Dict with WIP cache status information.
     """
     return {
         'enabled': REDIS_ENABLED,
@@ -78,6 +78,23 @@ def get_cache_status() -> dict:
     }


+def get_resource_cache_status() -> dict:
+    """Get current resource cache status.
+
+    Returns:
+        Dict with resource cache status information.
+    """
+    from mes_dashboard.services.resource_cache import (
+        get_cache_status as get_res_cache_status,
+        RESOURCE_CACHE_ENABLED,
+    )
+
+    if not RESOURCE_CACHE_ENABLED:
+        return {'enabled': False}
+
+    return get_res_cache_status()
+
+
 @health_bp.route('/health', methods=['GET'])
 def health_check():
     """Health check endpoint.
@@ -112,10 +129,16 @@
         status = 'healthy'
         http_code = 200

+    # Check resource cache status
+    resource_cache = get_resource_cache_status()
+    if resource_cache.get('enabled') and not resource_cache.get('loaded'):
+        warnings.append("Resource cache not loaded")
+
     response = {
         'status': status,
         'services': services,
-        'cache': get_cache_status()
+        'cache': get_cache_status(),
+        'resource_cache': resource_cache
     }

     if errors:

@@ -2,7 +2,7 @@
 """WIP API routes for MES Dashboard.

 Contains Flask Blueprint for WIP-related API endpoints.
-Uses DWH.DW_PJ_LOT_V view for real-time WIP data.
+Uses DW_MES_LOT_V view for real-time WIP data.
 """

 from flask import Blueprint, jsonify, request

@@ -11,11 +11,6 @@ from datetime import datetime, timedelta
 from typing import Optional, Dict, List, Any

 from mes_dashboard.core.database import read_sql_df
-from mes_dashboard.config.constants import (
-    EXCLUDED_LOCATIONS,
-    EXCLUDED_ASSET_STATUSES,
-    EQUIPMENT_TYPE_FILTER,
-)

 logger = logging.getLogger('mes_dashboard.filter_cache')

@@ -24,7 +19,7 @@ logger = logging.getLogger('mes_dashboard.filter_cache')
 # ============================================================

 CACHE_TTL_SECONDS = 3600  # 1 hour cache TTL
-WIP_VIEW = "DWH.DW_PJ_LOT_V"
+WIP_VIEW = "DW_MES_LOT_V"

 # ============================================================
 # Cache Storage
@@ -33,7 +28,6 @@ WIP_VIEW = "DWH.DW_PJ_LOT_V"
 _CACHE = {
     'workcenter_groups': None,   # List of {name, sequence}
     'workcenter_mapping': None,  # Dict {workcentername: {group, sequence}}
-    'resource_families': None,   # List of family names
     'last_refresh': None,
     'is_loading': False,
 }
@@ -85,20 +79,6 @@ def get_workcenters_for_groups(groups: List[str]) -> List[str]:
     return result


-# ============================================================
-# Resource Family Functions
-# ============================================================
-
-def get_resource_families(force_refresh: bool = False) -> Optional[List[str]]:
-    """Get list of resource family names.
-
-    Returns:
-        Sorted list of RESOURCEFAMILYNAME values, or None if loading fails.
-    """
-    _ensure_cache_loaded(force_refresh)
-    return _CACHE.get('resource_families')
-
-
 # ============================================================
 # Cache Management
 # ============================================================
@@ -117,7 +97,6 @@ def get_cache_status() -> Dict[str, Any]:
         'is_loading': _CACHE.get('is_loading', False),
         'workcenter_groups_count': len(_CACHE.get('workcenter_groups') or []),
         'workcenter_mapping_count': len(_CACHE.get('workcenter_mapping') or {}),
-        'resource_families_count': len(_CACHE.get('resource_families') or []),
     }


@@ -165,22 +144,18 @@ def _load_cache() -> bool:
     _CACHE['is_loading'] = True

     try:
-        # Load workcenter groups from DW_PJ_LOT_V
+        # Load workcenter groups from DW_MES_LOT_V
         wc_groups, wc_mapping = _load_workcenter_data()

-        # Load resource families from DW_MES_RESOURCE
-        families = _load_resource_families()
-
         with _CACHE_LOCK:
             _CACHE['workcenter_groups'] = wc_groups
             _CACHE['workcenter_mapping'] = wc_mapping
-            _CACHE['resource_families'] = families
             _CACHE['last_refresh'] = datetime.now()
             _CACHE['is_loading'] = False

         logger.info(
             f"Filter cache refreshed: {len(wc_groups or [])} groups, "
-            f"{len(wc_mapping or {})} workcenters, {len(families or [])} families"
+            f"{len(wc_mapping or {})} workcenters"
         )
         return True

@@ -192,11 +167,24 @@ def _load_cache() -> bool:


 def _load_workcenter_data():
-    """Load workcenter group data from DW_PJ_LOT_V.
+    """Load workcenter group data from WIP cache (Redis) or fallback to Oracle.

     Returns:
         Tuple of (groups_list, mapping_dict)
     """
+    # Try to load from WIP Redis cache first
+    try:
+        from mes_dashboard.core.cache import get_cached_wip_data
+
+        df = get_cached_wip_data()
+        if df is not None and not df.empty:
+            logger.debug("Loading workcenter groups from WIP cache")
+            return _extract_workcenter_data_from_df(df)
+    except Exception as exc:
+        logger.warning(f"Failed to load from WIP cache: {exc}")
+
+    # Fallback to Oracle direct query
+    logger.debug("Falling back to Oracle for workcenter groups")
     try:
         sql = f"""
             SELECT DISTINCT
@@ -211,72 +199,53 @@
         df = read_sql_df(sql)

         if df is None or df.empty:
-            logger.warning("No workcenter data found in DW_PJ_LOT_V")
+            logger.warning("No workcenter data found in DW_MES_LOT_V")
             return [], {}

-        # Build groups list (unique groups, take minimum sequence for each group)
-        groups_df = df.groupby('WORKCENTER_GROUP')['WORKCENTERSEQUENCE_GROUP'].min().reset_index()
-        groups_df = groups_df.sort_values('WORKCENTERSEQUENCE_GROUP')
-
-        groups = []
-        for _, row in groups_df.iterrows():
-            groups.append({
-                'name': row['WORKCENTER_GROUP'],
-                'sequence': int(row['WORKCENTERSEQUENCE_GROUP'] or 999)
-            })
-
-        # Build mapping dict
-        mapping = {}
-        for _, row in df.iterrows():
-            wc_name = row['WORKCENTERNAME']
-            mapping[wc_name] = {
-                'id': row.get('WORKCENTERID'),
-                'group': row['WORKCENTER_GROUP'],
-                'sequence': int(row['WORKCENTERSEQUENCE_GROUP'] or 999)
-            }
-
-        return groups, mapping
+        return _extract_workcenter_data_from_df(df)

     except Exception as exc:
         logger.error(f"Failed to load workcenter data: {exc}")
         return [], {}


-def _load_resource_families():
-    """Load resource family data from DW_MES_RESOURCE.
+def _extract_workcenter_data_from_df(df):
+    """Extract workcenter groups and mapping from DataFrame.
+
+    Args:
+        df: DataFrame with WORKCENTERNAME, WORKCENTER_GROUP, WORKCENTERSEQUENCE_GROUP columns

     Returns:
-        Sorted list of family names
+        Tuple of (groups_list, mapping_dict)
     """
-    try:
-        # Build exclusion filters (note: column name is LOCATIONNAME, not LOCATION)
-        location_list = ", ".join(f"'{loc}'" for loc in EXCLUDED_LOCATIONS)
-        location_filter = f"AND (r.LOCATIONNAME IS NULL OR r.LOCATIONNAME NOT IN ({location_list}))" if EXCLUDED_LOCATIONS else ""
+    # Filter to rows with valid workcenter group
+    df = df[df['WORKCENTER_GROUP'].notna() & df['WORKCENTERNAME'].notna()]

-        # Note: Column name is PJ_ASSETSSTATUS (double S), not ASSETSTATUS
-        status_list = ", ".join(f"'{s}'" for s in EXCLUDED_ASSET_STATUSES)
-        asset_status_filter = f"AND r.PJ_ASSETSSTATUS NOT IN ({status_list})" if EXCLUDED_ASSET_STATUSES else ""
+    if df.empty:
+        return [], {}

-        sql = f"""
-            SELECT DISTINCT RESOURCEFAMILYNAME
-            FROM DW_MES_RESOURCE r
-            WHERE {EQUIPMENT_TYPE_FILTER}
-            {location_filter}
-            {asset_status_filter}
-            AND RESOURCEFAMILYNAME IS NOT NULL
-        """
-        df = read_sql_df(sql)
+    # Build groups list (unique groups, take minimum sequence for each group)
+    groups_df = df.groupby('WORKCENTER_GROUP')['WORKCENTERSEQUENCE_GROUP'].min().reset_index()
+    groups_df = groups_df.sort_values('WORKCENTERSEQUENCE_GROUP')

-        if df is None or df.empty:
-            logger.warning("No resource family data found")
-            return []
+    groups = []
+    for _, row in groups_df.iterrows():
+        groups.append({
+            'name': row['WORKCENTER_GROUP'],
+            'sequence': int(row['WORKCENTERSEQUENCE_GROUP'] or 999)
+        })

-        families = df['RESOURCEFAMILYNAME'].dropna().unique().tolist()
-        return sorted(families)
+    # Build mapping dict
+    mapping = {}
+    for _, row in df.iterrows():
+        wc_name = row['WORKCENTERNAME']
+        mapping[wc_name] = {
+            'id': row.get('WORKCENTERID'),
+            'group': row['WORKCENTER_GROUP'],
+            'sequence': int(row['WORKCENTERSEQUENCE_GROUP'] or 999)
+        }

-    except Exception as exc:
-        logger.error(f"Failed to load resource families: {exc}")
-        return []
+    return groups, mapping

# ============================================================
|
||||
|
||||
476	src/mes_dashboard/services/resource_cache.py	Normal file
@@ -0,0 +1,476 @@
# -*- coding: utf-8 -*-
"""Resource Cache - DW_MES_RESOURCE 全表快取模組.

全表快取套用全域篩選後的設備主檔資料至 Redis。
提供統一 API 供各模組取用設備資料和篩選器選項。
"""

from __future__ import annotations

import io
import json
import logging
import os
from datetime import datetime
from typing import Any, Dict, List, Optional

import pandas as pd

from mes_dashboard.core.redis_client import (
    get_redis_client,
    redis_available,
    REDIS_ENABLED,
    REDIS_KEY_PREFIX,
)
from mes_dashboard.core.database import read_sql_df
from mes_dashboard.config.constants import (
    EXCLUDED_LOCATIONS,
    EXCLUDED_ASSET_STATUSES,
    EQUIPMENT_TYPE_FILTER,
)

logger = logging.getLogger('mes_dashboard.resource_cache')

# ============================================================
# Configuration
# ============================================================

RESOURCE_CACHE_ENABLED = os.getenv('RESOURCE_CACHE_ENABLED', 'true').lower() == 'true'
RESOURCE_SYNC_INTERVAL = int(os.getenv('RESOURCE_SYNC_INTERVAL', '14400'))  # 4 hours

# Redis key helpers
def _get_key(key: str) -> str:
    """Get full Redis key with resource prefix."""
    return f"{REDIS_KEY_PREFIX}:resource:{key}"


# ============================================================
# Internal: Oracle Load Functions
# ============================================================

def _build_filter_sql() -> str:
    """Build SQL WHERE clause for global filters."""
    conditions = [EQUIPMENT_TYPE_FILTER.strip()]

    # Location filter
    if EXCLUDED_LOCATIONS:
        locations_list = ", ".join(f"'{loc}'" for loc in EXCLUDED_LOCATIONS)
        conditions.append(
            f"(LOCATIONNAME IS NULL OR LOCATIONNAME NOT IN ({locations_list}))"
        )

    # Asset status filter
    if EXCLUDED_ASSET_STATUSES:
        status_list = ", ".join(f"'{s}'" for s in EXCLUDED_ASSET_STATUSES)
        conditions.append(
            f"(PJ_ASSETSSTATUS IS NULL OR PJ_ASSETSSTATUS NOT IN ({status_list}))"
        )

    return " AND ".join(conditions)


def _load_from_oracle() -> Optional[pd.DataFrame]:
    """從 Oracle 載入全表資料(套用全域篩選).

    Returns:
        DataFrame with all columns, or None if query failed.
    """
    filter_sql = _build_filter_sql()
    sql = f"""
        SELECT *
        FROM DW_MES_RESOURCE
        WHERE {filter_sql}
    """
    try:
        df = read_sql_df(sql)
        if df is not None:
            logger.info(f"Loaded {len(df)} resources from Oracle")
        return df
    except Exception as e:
        logger.error(f"Failed to load resources from Oracle: {e}")
        return None


def _get_version_from_oracle() -> Optional[str]:
    """取得 Oracle 資料版本(MAX(LASTCHANGEDATE)).

    Returns:
        Version string (ISO format), or None if query failed.
    """
    filter_sql = _build_filter_sql()
    sql = f"""
        SELECT MAX(LASTCHANGEDATE) as VERSION
        FROM DW_MES_RESOURCE
        WHERE {filter_sql}
    """
    try:
        df = read_sql_df(sql)
        if df is not None and not df.empty:
            version = df.iloc[0]['VERSION']
            if version is not None:
                if hasattr(version, 'isoformat'):
                    return version.isoformat()
                return str(version)
        return None
    except Exception as e:
        logger.error(f"Failed to get version from Oracle: {e}")
        return None
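The `isoformat` branch above normalizes whatever the driver returns for `MAX(LASTCHANGEDATE)` (a `datetime`, a pandas `Timestamp`, or already a string) into one comparable string. A minimal sketch of just that normalization:

```python
from datetime import datetime

def normalize_version(version):
    """Coerce the MAX(LASTCHANGEDATE) value into a comparable string."""
    if version is None:
        return None
    if hasattr(version, 'isoformat'):  # datetime / pandas Timestamp
        return version.isoformat()
    return str(version)                # already a string or other scalar

print(normalize_version(datetime(2026, 1, 29, 13, 49, 59)))  # 2026-01-29T13:49:59
```

Normalizing both sides to strings is what makes the later `redis_version == oracle_version` comparison safe even though Redis only stores strings.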


# ============================================================
# Internal: Redis Functions
# ============================================================

def _get_version_from_redis() -> Optional[str]:
    """取得 Redis 快取版本.

    Returns:
        Cached version string, or None.
    """
    client = get_redis_client()
    if client is None:
        return None

    try:
        return client.get(_get_key("meta:version"))
    except Exception as e:
        logger.warning(f"Failed to get version from Redis: {e}")
        return None


def _sync_to_redis(df: pd.DataFrame, version: str) -> bool:
    """同步至 Redis(使用 pipeline 確保原子性).

    Args:
        df: DataFrame with resource data.
        version: Version string (MAX(LASTCHANGEDATE)).

    Returns:
        True if sync was successful.
    """
    client = get_redis_client()
    if client is None:
        return False

    try:
        # Convert DataFrame to JSON
        # Handle datetime columns
        df_copy = df.copy()
        for col in df_copy.select_dtypes(include=['datetime64']).columns:
            df_copy[col] = df_copy[col].astype(str)

        data_json = df_copy.to_json(orient='records', force_ascii=False)

        # Atomic update using pipeline
        now = datetime.now().isoformat()
        pipe = client.pipeline()
        pipe.set(_get_key("data"), data_json)
        pipe.set(_get_key("meta:version"), version)
        pipe.set(_get_key("meta:updated"), now)
        pipe.set(_get_key("meta:count"), str(len(df)))
        pipe.execute()

        logger.info(f"Resource cache synced: {len(df)} rows, version={version}")
        return True
    except Exception as e:
        logger.error(f"Failed to sync to Redis: {e}")
        return False
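The serialization step above casts datetime columns to strings before `to_json`, and `_get_cached_data` later reverses it with `pd.read_json`. A standalone sketch of that round trip on synthetic data (not the real table):

```python
import io
import pandas as pd

df = pd.DataFrame({
    'RESOURCEID': ['R001', 'R002'],
    'LASTCHANGEDATE': pd.to_datetime(['2026-01-01 08:00:00', '2026-01-29 13:49:59']),
})

# Datetime columns are cast to strings first, as in _sync_to_redis
df_copy = df.copy()
for col in df_copy.select_dtypes(include=['datetime64']).columns:
    df_copy[col] = df_copy[col].astype(str)
data_json = df_copy.to_json(orient='records', force_ascii=False)

# Reading back mirrors _get_cached_data
restored = pd.read_json(io.StringIO(data_json), orient='records')
print(restored['RESOURCEID'].tolist())
```

Without the explicit cast, `to_json` would emit datetimes as epoch milliseconds, which would surprise any consumer expecting human-readable timestamps.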


def _get_cached_data() -> Optional[pd.DataFrame]:
    """Get cached resource data from Redis.

    Returns:
        DataFrame with resource data, or None if cache miss.
    """
    if not REDIS_ENABLED or not RESOURCE_CACHE_ENABLED:
        return None

    client = get_redis_client()
    if client is None:
        return None

    try:
        data_json = client.get(_get_key("data"))
        if data_json is None:
            logger.debug("Resource cache miss: no data in Redis")
            return None

        df = pd.read_json(io.StringIO(data_json), orient='records')
        logger.debug(f"Resource cache hit: loaded {len(df)} rows from Redis")
        return df
    except Exception as e:
        logger.warning(f"Failed to read resource cache: {e}")
        return None


# ============================================================
# Cache Management API
# ============================================================

def refresh_cache(force: bool = False) -> bool:
    """手動刷新快取.

    Args:
        force: 強制刷新,忽略版本檢查.

    Returns:
        True if cache was refreshed.
    """
    if not REDIS_ENABLED or not RESOURCE_CACHE_ENABLED:
        logger.info("Resource cache is disabled")
        return False

    if not redis_available():
        logger.warning("Redis not available, cannot refresh resource cache")
        return False

    try:
        # Get versions
        oracle_version = _get_version_from_oracle()
        if oracle_version is None:
            logger.error("Failed to get version from Oracle")
            return False

        redis_version = _get_version_from_redis()

        # Check if update needed
        if not force and redis_version == oracle_version:
            logger.debug(f"Resource cache version unchanged ({oracle_version}), skipping")
            return False

        logger.info(f"Resource cache version changed: {redis_version} -> {oracle_version}")

        # Load and sync
        df = _load_from_oracle()
        if df is None or df.empty:
            logger.error("Failed to load resources from Oracle")
            return False

        return _sync_to_redis(df, oracle_version)

    except Exception as e:
        logger.error(f"Failed to refresh resource cache: {e}", exc_info=True)
        return False
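The decision logic inside `refresh_cache` reduces to a small pure function, sketched here in isolation (simplified: it ignores the Redis availability checks):

```python
def should_refresh(oracle_version, redis_version, force=False):
    """Decide whether a sync is needed, mirroring refresh_cache's checks."""
    if oracle_version is None:
        return False                 # source version unknown -> abort, keep old cache
    if force:
        return True                  # explicit refresh ignores the version check
    return redis_version != oracle_version

assert should_refresh('2026-01-29T13:49:59', '2026-01-28T08:00:00') is True   # newer data
assert should_refresh('2026-01-29T13:49:59', '2026-01-29T13:49:59') is False  # unchanged
assert should_refresh('2026-01-29T13:49:59', '2026-01-29T13:49:59', force=True) is True
assert should_refresh(None, None) is False                                    # Oracle query failed
```

This is why the 4-hour background sync is cheap most of the time: it only issues the full `SELECT *` when `MAX(LASTCHANGEDATE)` actually moved.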


def init_cache() -> None:
    """初始化快取(應用啟動時呼叫)."""
    if not REDIS_ENABLED or not RESOURCE_CACHE_ENABLED:
        logger.info("Resource cache is disabled, skipping init")
        return

    if not redis_available():
        logger.warning("Redis not available during resource cache init")
        return

    # Check if cache exists
    client = get_redis_client()
    if client is None:
        return

    try:
        exists = client.exists(_get_key("data"))
        if not exists:
            logger.info("Resource cache empty, performing initial load...")
            refresh_cache(force=True)
        else:
            logger.info("Resource cache already populated")
    except Exception as e:
        logger.error(f"Failed to init resource cache: {e}")


def get_cache_status() -> Dict[str, Any]:
    """取得快取狀態資訊.

    Returns:
        Dict with cache status.
    """
    status = {
        'enabled': REDIS_ENABLED and RESOURCE_CACHE_ENABLED,
        'loaded': False,
        'count': 0,
        'version': None,
        'updated_at': None,
    }

    if not status['enabled']:
        return status

    client = get_redis_client()
    if client is None:
        return status

    try:
        status['loaded'] = client.exists(_get_key("data")) > 0
        if status['loaded']:
            count_str = client.get(_get_key("meta:count"))
            status['count'] = int(count_str) if count_str else 0
            status['version'] = client.get(_get_key("meta:version"))
            status['updated_at'] = client.get(_get_key("meta:updated"))
    except Exception as e:
        logger.warning(f"Failed to get resource cache status: {e}")

    return status
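For reference, the dict returned by `get_cache_status()` (surfaced as `resource_cache` in the `/health` payload that the dashboard JS consumes) has this shape when the cache is populated; the values below are illustrative, not real data:

```python
status = {
    'enabled': True,                      # REDIS_ENABLED and RESOURCE_CACHE_ENABLED
    'loaded': True,                       # the "data" key exists in Redis
    'count': 1234,                        # illustrative row count from meta:count
    'version': '2026-01-29T13:49:59',     # MAX(LASTCHANGEDATE) snapshot
    'updated_at': '2026-01-29T14:00:00',  # wall-clock time of the last sync
}
assert set(status) == {'enabled', 'loaded', 'count', 'version', 'updated_at'}
```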


# ============================================================
# Query API
# ============================================================

def get_all_resources() -> List[Dict]:
    """取得所有快取中的設備資料(全欄位).

    Falls back to Oracle if cache unavailable.

    Returns:
        List of resource dicts.
    """
    # Try cache first
    df = _get_cached_data()
    if df is not None:
        return df.to_dict(orient='records')

    # Fallback to Oracle
    logger.info("Resource cache miss, falling back to Oracle")
    df = _load_from_oracle()
    if df is not None:
        return df.to_dict(orient='records')

    return []


def get_resource_by_id(resource_id: str) -> Optional[Dict]:
    """依 RESOURCEID 取得單筆設備資料.

    Args:
        resource_id: The RESOURCEID to look up.

    Returns:
        Resource dict, or None if not found.
    """
    resources = get_all_resources()
    for r in resources:
        if r.get('RESOURCEID') == resource_id:
            return r
    return None


def get_resources_by_ids(resource_ids: List[str]) -> List[Dict]:
    """依 RESOURCEID 清單批次取得設備資料.

    Args:
        resource_ids: List of RESOURCEIDs to look up.

    Returns:
        List of matching resource dicts.
    """
    id_set = set(resource_ids)
    resources = get_all_resources()
    return [r for r in resources if r.get('RESOURCEID') in id_set]


def get_resources_by_filter(
    workcenters: Optional[List[str]] = None,
    families: Optional[List[str]] = None,
    departments: Optional[List[str]] = None,
    locations: Optional[List[str]] = None,
    is_production: Optional[bool] = None,
    is_key: Optional[bool] = None,
    is_monitor: Optional[bool] = None,
) -> List[Dict]:
    """依條件篩選設備資料(在 Python 端篩選).

    Args:
        workcenters: Filter by WORKCENTERNAME values.
        families: Filter by RESOURCEFAMILYNAME values.
        departments: Filter by PJ_DEPARTMENT values.
        locations: Filter by LOCATIONNAME values.
        is_production: Filter by PJ_ISPRODUCTION flag.
        is_key: Filter by PJ_ISKEY flag.
        is_monitor: Filter by PJ_ISMONITOR flag.

    Returns:
        List of matching resource dicts.
    """
    resources = get_all_resources()

    result = []
    for r in resources:
        # Apply filters
        if workcenters and r.get('WORKCENTERNAME') not in workcenters:
            continue
        if families and r.get('RESOURCEFAMILYNAME') not in families:
            continue
        if departments and r.get('PJ_DEPARTMENT') not in departments:
            continue
        if locations and r.get('LOCATIONNAME') not in locations:
            continue
        if is_production is not None:
            val = r.get('PJ_ISPRODUCTION')
            if (val == 1) != is_production:
                continue
        if is_key is not None:
            val = r.get('PJ_ISKEY')
            if (val == 1) != is_key:
                continue
        if is_monitor is not None:
            val = r.get('PJ_ISMONITOR')
            if (val == 1) != is_monitor:
                continue

        result.append(r)

    return result
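The `(val == 1) != wanted` flag comparisons above encode one specific rule: the Oracle `PJ_IS*` columns hold 0/1, and only an exact 1 counts as True, so a NULL (or 0, or missing key) is treated as False. A standalone sketch on synthetic rows:

```python
def flag_matches(value, wanted):
    """PJ_IS* flags are stored as 0/1; only an exact 1 counts as True."""
    if wanted is None:
        return True                  # filter not applied
    return (value == 1) == wanted

rows = [
    {'RESOURCEID': 'R001', 'PJ_ISKEY': 1},
    {'RESOURCEID': 'R002', 'PJ_ISKEY': 0},
    {'RESOURCEID': 'R003', 'PJ_ISKEY': None},  # missing flag -> treated as False
]
key_only = [r['RESOURCEID'] for r in rows if flag_matches(r.get('PJ_ISKEY'), True)]
print(key_only)  # ['R001']
```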


# ============================================================
# Distinct Values API (for filters)
# ============================================================

def get_distinct_values(column: str) -> List[str]:
    """取得指定欄位的唯一值清單(排序後).

    Args:
        column: Column name (e.g., 'RESOURCEFAMILYNAME').

    Returns:
        Sorted list of unique values (excluding None, NaN, and empty strings).
    """
    resources = get_all_resources()
    values = set()
    for r in resources:
        val = r.get(column)
        # Skip None, empty strings, and NaN (pandas converts NaN to float)
        if val is None or val == '':
            continue
        # Check for NaN (float type and is NaN)
        if isinstance(val, float) and pd.isna(val):
            continue
        values.add(str(val) if not isinstance(val, str) else val)
    return sorted(values)
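The None/empty/NaN skipping above matters because pandas turns missing cells into `float('nan')` when the records round-trip through JSON, and `nan` values would otherwise pollute (and break the sorting of) the filter dropdowns. A standalone sketch with `math.isnan` in place of `pd.isna`:

```python
import math

raw = ['FAB-A', '', None, float('nan'), 'FAB-B', 'FAB-A']
values = set()
for val in raw:
    if val is None or val == '':
        continue
    if isinstance(val, float) and math.isnan(val):  # NaN from missing cells
        continue
    values.add(val if isinstance(val, str) else str(val))
print(sorted(values))  # ['FAB-A', 'FAB-B']
```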


def get_resource_families() -> List[str]:
    """取得型號清單(便捷方法)."""
    return get_distinct_values('RESOURCEFAMILYNAME')


def get_workcenters() -> List[str]:
    """取得站點清單(便捷方法)."""
    return get_distinct_values('WORKCENTERNAME')


def get_departments() -> List[str]:
    """取得部門清單(便捷方法)."""
    return get_distinct_values('PJ_DEPARTMENT')


def get_locations() -> List[str]:
    """取得區域清單(便捷方法)."""
    return get_distinct_values('LOCATIONNAME')


def get_vendors() -> List[str]:
    """取得供應商清單(便捷方法)."""
    return get_distinct_values('VENDORNAME')
@@ -41,7 +41,7 @@ E10_STATUSES = ['PRD', 'SBY', 'UDT', 'SDT', 'EGT', 'NST']
def get_filter_options() -> Optional[Dict[str, Any]]:
    """Get filter options from cache.

    Uses cached workcenter groups from DW_PJ_LOT_V and resource families from DW_MES_RESOURCE.
    Uses cached workcenter groups from DW_MES_LOT_V and resource families from resource_cache.

    Returns:
        Dict with:
@@ -49,10 +49,8 @@ def get_filter_options() -> Optional[Dict[str, Any]]:
        - 'families': List of family names sorted alphabetically
        Or None if cache loading fails.
    """
    from mes_dashboard.services.filter_cache import (
        get_workcenter_groups,
        get_resource_families,
    )
    from mes_dashboard.services.filter_cache import get_workcenter_groups
    from mes_dashboard.services.resource_cache import get_resource_families

    try:
        groups = get_workcenter_groups()
@@ -4,16 +4,16 @@
Provides functions to query equipment status from DW_MES_RESOURCE and DW_MES_RESOURCESTATUS tables.
"""

import pandas as pd
from typing import Optional, Dict, List, Any

from mes_dashboard.core.database import get_db_connection, read_sql_df
from mes_dashboard.core.utils import get_days_back, build_equipment_filter_sql
from mes_dashboard.config.constants import (
    EXCLUDED_LOCATIONS,
    EXCLUDED_ASSET_STATUSES,
    DEFAULT_DAYS_BACK,
)


# ============================================================
@@ -343,7 +343,8 @@ def query_resource_workcenter_status_matrix(days_back: int = 30) -> Optional[pd.
def query_resource_filter_options(days_back: int = 30) -> Optional[Dict]:
    """Get available filter options for resource queries.

    Optimized: combines multiple queries into fewer database calls.
    Uses resource_cache for static resource data (workcenters, families, departments, locations).
    Only queries Oracle for dynamic status data.

    Args:
        days_back: Number of days to look back
@@ -351,36 +352,41 @@ def query_resource_filter_options(days_back: int = 30) -> Optional[Dict]:
    Returns:
        Dict with filter options or None if query fails.
    """
    from mes_dashboard.services.resource_cache import (
        get_workcenters,
        get_resource_families,
        get_departments,
        get_locations,
        get_distinct_values,
    )

    try:
        # Query from latest status data
        sql_latest = f"""
            SELECT
                WORKCENTERNAME,
                NEWSTATUSNAME,
                RESOURCEFAMILYNAME,
                PJ_DEPARTMENT
            FROM ({get_resource_latest_status_subquery(days_back)}) rs
        """
        latest_df = read_sql_df(sql_latest)
        # Get static filter options from resource cache
        workcenters = get_workcenters()
        families = get_resource_families()
        departments = get_departments()
        locations = get_locations()
        assets_statuses = get_distinct_values('PJ_ASSETSSTATUS')

        # Query from resource table for location and asset status
        sql_resource = """
            SELECT
                LOCATIONNAME,
                PJ_ASSETSSTATUS
        # Query only dynamic status data from Oracle
        # Note: Can't wrap CTE in subquery, so use inline approach
        sql_statuses = f"""
            WITH latest_txn AS (
                SELECT MAX(COALESCE(TXNDATE, LASTSTATUSCHANGEDATE)) AS MAX_TXNDATE
                FROM DW_MES_RESOURCESTATUS
            )
            SELECT DISTINCT s.NEWSTATUSNAME
            FROM DW_MES_RESOURCE r
            JOIN DW_MES_RESOURCESTATUS s ON r.RESOURCEID = s.HISTORYID
            CROSS JOIN latest_txn lt
            WHERE ((r.OBJECTCATEGORY = 'ASSEMBLY' AND r.OBJECTTYPE = 'ASSEMBLY')
                OR (r.OBJECTCATEGORY = 'WAFERSORT' AND r.OBJECTTYPE = 'WAFERSORT'))
            AND COALESCE(s.TXNDATE, s.LASTSTATUSCHANGEDATE) >= lt.MAX_TXNDATE - {days_back}
            AND s.NEWSTATUSNAME IS NOT NULL
            ORDER BY s.NEWSTATUSNAME
        """
        resource_df = read_sql_df(sql_resource)

        # Extract unique values
        workcenters = sorted(latest_df['WORKCENTERNAME'].dropna().unique().tolist())
        statuses = sorted(latest_df['NEWSTATUSNAME'].dropna().unique().tolist())
        families = sorted(latest_df['RESOURCEFAMILYNAME'].dropna().unique().tolist())
        departments = sorted(latest_df['PJ_DEPARTMENT'].dropna().unique().tolist())
        locations = sorted(resource_df['LOCATIONNAME'].dropna().unique().tolist())
        assets_statuses = sorted(resource_df['PJ_ASSETSSTATUS'].dropna().unique().tolist())
        status_df = read_sql_df(sql_statuses)
        statuses = status_df['NEWSTATUSNAME'].tolist() if status_df is not None else []

        return {
            'workcenters': workcenters,
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""WIP (Work In Progress) query services for MES Dashboard.

Provides functions to query WIP data from DWH.DW_PJ_LOT_V view.
Provides functions to query WIP data from DW_MES_LOT_V view.
This view provides real-time WIP information updated every 5 minutes.

Now uses Redis cache when available, with fallback to Oracle direct query.
@@ -114,8 +114,8 @@ def _build_hold_type_sql_list() -> str:
# ============================================================
# Data Source Configuration
# ============================================================
# The view DWH.DW_PJ_LOT_V must be accessed with schema prefix
WIP_VIEW = "DWH.DW_PJ_LOT_V"
# WIP view for real-time lot data
WIP_VIEW = "DW_MES_LOT_V"


# ============================================================
@@ -281,10 +281,17 @@
                <span class="health-item-value" id="redisStatus">--</span>
            </div>
            <div class="health-cache-info" id="cacheInfo">
                <div>快取狀態:<span id="cacheEnabled">--</span></div>
                <h5 style="margin: 8px 0 4px 0; font-size: 12px; color: #667eea;">WIP 快取</h5>
                <div>狀態:<span id="cacheEnabled">--</span></div>
                <div>資料更新時間:<span id="cacheSysDate">--</span></div>
                <div>最後同步:<span id="cacheUpdatedAt">--</span></div>
            </div>
            <div class="health-cache-info" id="resourceCacheInfo">
                <h5 style="margin: 8px 0 4px 0; font-size: 12px; color: #667eea;">設備主檔快取</h5>
                <div>狀態:<span id="resourceCacheEnabled">--</span></div>
                <div>資料筆數:<span id="resourceCacheCount">--</span></div>
                <div>最後同步:<span id="resourceCacheUpdatedAt">--</span></div>
            </div>
        </div>
        <div class="admin-status">
            {% if is_admin %}
@@ -312,7 +319,7 @@
            <button class="tab" data-target="excelQueryFrame">Excel 批次查詢</button>
            {% endif %}
            {% if can_view_page('/resource-history') %}
            <button class="tab" data-target="resourceHistoryFrame">機台歷史分析</button>
            <button class="tab" data-target="resourceHistoryFrame">設備歷史績效</button>
            {% endif %}
        </div>

@@ -331,7 +338,7 @@
        <iframe id="excelQueryFrame" data-src="/excel-query" title="Excel 批次查詢"></iframe>
        {% endif %}
        {% if can_view_page('/resource-history') %}
        <iframe id="resourceHistoryFrame" data-src="/resource-history" title="機台歷史分析"></iframe>
        <iframe id="resourceHistoryFrame" data-src="/resource-history" title="設備歷史績效"></iframe>
        {% endif %}
    </div>
</div>
@@ -397,6 +404,9 @@
    const cacheEnabled = document.getElementById('cacheEnabled');
    const cacheSysDate = document.getElementById('cacheSysDate');
    const cacheUpdatedAt = document.getElementById('cacheUpdatedAt');
    const resourceCacheEnabled = document.getElementById('resourceCacheEnabled');
    const resourceCacheCount = document.getElementById('resourceCacheCount');
    const resourceCacheUpdatedAt = document.getElementById('resourceCacheUpdatedAt');

    function toggleHealthPopup() {
        healthPopup.classList.toggle('show');
@@ -469,12 +479,26 @@
        redisStatus.innerHTML = `${formatStatus(redisState)} ${redisText}`;
        setStatusClass(redisStatus, redisState);

        // Update cache info
        // Update WIP cache info
        const cache = data.cache || {};
        cacheEnabled.textContent = cache.enabled ? '已啟用' : '未啟用';
        cacheSysDate.textContent = cache.sys_date || '--';
        cacheUpdatedAt.textContent = formatDateTime(cache.updated_at);

        // Update resource cache info
        const resCache = data.resource_cache || {};
        if (resCache.enabled) {
            resourceCacheEnabled.textContent = resCache.loaded ? '已載入' : '未載入';
            resourceCacheEnabled.style.color = resCache.loaded ? '#22c55e' : '#f59e0b';
            resourceCacheCount.textContent = resCache.count ? `${resCache.count} 筆` : '--';
            resourceCacheUpdatedAt.textContent = formatDateTime(resCache.updated_at);
        } else {
            resourceCacheEnabled.textContent = '未啟用';
            resourceCacheEnabled.style.color = '#9ca3af';
            resourceCacheCount.textContent = '--';
            resourceCacheUpdatedAt.textContent = '--';
        }

    } catch (error) {
        console.error('Health check failed:', error);
        healthDot.classList.remove('loading', 'healthy', 'degraded');
@@ -1,6 +1,6 @@
{% extends "_base.html" %}

{% block title %}機台歷史表現分析{% endblock %}
{% block title %}設備歷史績效{% endblock %}

{% block head_extra %}
<script src="/static/js/echarts.min.js"></script>
@@ -529,7 +529,7 @@
<div class="dashboard">
    <!-- Header -->
    <div class="header">
        <h1>機台歷史表現分析</h1>
        <h1>設備歷史績效</h1>
    </div>

    <!-- Filter Bar -->
250	tests/e2e/test_resource_cache_e2e.py	Normal file
@@ -0,0 +1,250 @@
# -*- coding: utf-8 -*-
"""End-to-end tests for Resource Cache functionality.

These tests require a running server with Redis enabled.
Run with: pytest tests/e2e/test_resource_cache_e2e.py -v --run-e2e
"""

import pytest
import requests


@pytest.mark.e2e
class TestHealthEndpointResourceCacheE2E:
    """E2E tests for /health endpoint resource cache status."""

    def test_health_includes_resource_cache(self, health_url):
        """Test health endpoint includes resource_cache field."""
        response = requests.get(health_url, timeout=10)

        assert response.status_code in [200, 503]
        data = response.json()
        assert 'resource_cache' in data

    def test_resource_cache_has_required_fields(self, health_url):
        """Test resource_cache has all required fields."""
        response = requests.get(health_url, timeout=10)
        data = response.json()

        rc = data['resource_cache']
        assert 'enabled' in rc

        if rc['enabled']:
            assert 'loaded' in rc
            assert 'count' in rc
            assert 'version' in rc
            assert 'updated_at' in rc

    def test_resource_cache_loaded_has_positive_count(self, health_url):
        """Test resource cache has positive count when loaded."""
        response = requests.get(health_url, timeout=10)
        data = response.json()

        rc = data['resource_cache']
        if rc.get('enabled') and rc.get('loaded'):
            assert rc['count'] > 0, "Resource cache should have data when loaded"


@pytest.mark.e2e
@pytest.mark.redis
class TestResourceHistoryOptionsE2E:
    """E2E tests for resource history filter options endpoint."""

    def test_options_endpoint_accessible(self, api_base_url):
        """Test resource history options endpoint is accessible."""
        response = requests.get(
            f"{api_base_url}/resource/history/options",
            timeout=30
        )

        assert response.status_code == 200

    def test_options_returns_families(self, api_base_url):
        """Test options endpoint returns families list."""
        response = requests.get(
            f"{api_base_url}/resource/history/options",
            timeout=30
        )

        assert response.status_code == 200
        data = response.json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'families' in options
            assert isinstance(options['families'], list)

    def test_options_returns_workcenter_groups(self, api_base_url):
        """Test options endpoint returns workcenter groups."""
        response = requests.get(
            f"{api_base_url}/resource/history/options",
            timeout=30
        )

        assert response.status_code == 200
        data = response.json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'workcenter_groups' in options
            assert isinstance(options['workcenter_groups'], list)


@pytest.mark.e2e
@pytest.mark.redis
class TestResourceFilterOptionsE2E:
    """E2E tests for resource filter options endpoint."""

    def test_filter_options_endpoint_accessible(self, api_base_url):
        """Test resource filter options endpoint is accessible."""
        response = requests.get(
            f"{api_base_url}/resource/filter_options",
            timeout=30
        )

        assert response.status_code == 200

    def test_filter_options_returns_workcenters(self, api_base_url):
        """Test filter options returns workcenters list."""
        response = requests.get(
            f"{api_base_url}/resource/filter_options",
            timeout=30
        )

        assert response.status_code == 200
        data = response.json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'workcenters' in options
            assert isinstance(options['workcenters'], list)

    def test_filter_options_returns_families(self, api_base_url):
        """Test filter options returns families list."""
        response = requests.get(
            f"{api_base_url}/resource/filter_options",
            timeout=30
        )

        assert response.status_code == 200
        data = response.json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'families' in options
            assert isinstance(options['families'], list)

    def test_filter_options_returns_departments(self, api_base_url):
        """Test filter options returns departments list."""
        response = requests.get(
            f"{api_base_url}/resource/filter_options",
            timeout=30
        )

        assert response.status_code == 200
        data = response.json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'departments' in options
            assert isinstance(options['departments'], list)

    def test_filter_options_returns_statuses(self, api_base_url):
        """Test filter options returns statuses list (from Oracle)."""
        response = requests.get(
            f"{api_base_url}/resource/filter_options",
            timeout=30
        )

        assert response.status_code == 200
        data = response.json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'statuses' in options
            assert isinstance(options['statuses'], list)


@pytest.mark.e2e
@pytest.mark.redis
class TestResourceCachePerformanceE2E:
    """E2E tests for resource cache performance."""

    def test_filter_options_response_time(self, api_base_url):
        """Test filter options responds within acceptable time."""
        import time

        # First request may trigger cache load
        requests.get(f"{api_base_url}/resource/filter_options", timeout=30)

        # Second request should be from cache
        start = time.time()
        response = requests.get(f"{api_base_url}/resource/filter_options", timeout=30)
        elapsed = time.time() - start

        assert response.status_code == 200
        # Note: statuses still queries Oracle, so allow more time
        # Other fields (workcenters, families, departments) come from Redis cache
        assert elapsed < 30.0, f"Response took {elapsed:.2f}s, expected < 30s"

    def test_history_options_response_time(self, api_base_url):
        """Test history options responds within acceptable time."""
        import time

        # First request
        requests.get(f"{api_base_url}/resource/history/options", timeout=30)

        # Second request should be from cache
        start = time.time()
        response = requests.get(f"{api_base_url}/resource/history/options", timeout=30)
        elapsed = time.time() - start

        assert response.status_code == 200
        # Should be fast (< 2 seconds)
        assert elapsed < 2.0, f"Response took {elapsed:.2f}s, expected < 2s"


@pytest.mark.e2e
@pytest.mark.redis
class TestResourceCacheDataConsistencyE2E:
    """E2E tests for resource cache data consistency."""

    def test_cache_count_matches_health_report(self, health_url, api_base_url):
        """Test cache count in health matches actual data count."""
        # Get health status
        health_resp = requests.get(health_url, timeout=10)
        health_data = health_resp.json()

        rc = health_data.get('resource_cache', {})
        if not rc.get('enabled') or not rc.get('loaded'):
            pytest.skip("Resource cache not enabled or loaded")

        reported_count = rc.get('count', 0)

        # Get filter options which uses cached data
        options_resp = requests.get(f"{api_base_url}/resource/filter_options", timeout=30)
        options_data = options_resp.json()

        # The workcenters list should be derived from the same cache
        if options_data.get('success'):
            workcenters = options_data.get('data', {}).get('workcenters', [])
            # Just verify we got data - exact count comparison is complex
            assert len(workcenters) > 0 or reported_count == 0

    def test_families_consistent_across_endpoints(self, api_base_url):
|
||||
"""Test families list is consistent across endpoints."""
|
||||
# Get from resource filter options
|
||||
filter_resp = requests.get(f"{api_base_url}/resource/filter_options", timeout=30)
|
||||
filter_data = filter_resp.json()
|
||||
|
||||
# Get from resource history options
|
||||
history_resp = requests.get(f"{api_base_url}/resource/history/options", timeout=30)
|
||||
history_data = history_resp.json()
|
||||
|
||||
if filter_data.get('success') and history_data.get('success'):
|
||||
filter_families = set(filter_data.get('data', {}).get('families', []))
|
||||
history_families = set(history_data.get('data', {}).get('families', []))
|
||||
|
||||
# Both should return the same families (from same cache)
|
||||
assert filter_families == history_families, \
|
||||
f"Families mismatch: filter has {len(filter_families)}, history has {len(history_families)}"
|
||||
@@ -43,7 +43,7 @@ class TestResourceHistoryPageAccess:
        assert response.status_code == 200
        content = response.data.decode('utf-8')
        assert '機台歷史表現分析' in content
        assert '設備歷史績效' in content

    def test_page_contains_filter_elements(self, client):
        """Page should contain all filter elements."""
@@ -304,7 +304,7 @@ class TestResourceHistoryNavigation:
        response = client.get('/')
        content = response.data.decode('utf-8')

        assert '機台歷史分析' in content
        assert '設備歷史績效' in content
        assert 'resourceHistoryFrame' in content
@@ -189,6 +189,165 @@ class TestWipApiWithCache:
        assert len(data) == 2  # PKG1 and PKG2


class TestHealthEndpointResourceCache:
    """Test /health endpoint resource cache status."""

    @patch('mes_dashboard.routes.health_routes.check_database')
    @patch('mes_dashboard.routes.health_routes.check_redis')
    @patch('mes_dashboard.routes.health_routes.get_cache_status')
    @patch('mes_dashboard.routes.health_routes.get_resource_cache_status')
    def test_health_includes_resource_cache(
        self, mock_res_cache_status, mock_cache_status, mock_check_redis, mock_check_db, app_with_mock_cache
    ):
        """Test health endpoint includes resource_cache field."""
        mock_check_db.return_value = ('ok', None)
        mock_check_redis.return_value = ('ok', None)
        mock_cache_status.return_value = {
            'enabled': True,
            'sys_date': '2024-01-15 10:30:00',
            'updated_at': '2024-01-15T10:30:00'
        }
        mock_res_cache_status.return_value = {
            'enabled': True,
            'loaded': True,
            'count': 1500,
            'version': '2024-01-15T10:00:00',
            'updated_at': '2024-01-15T10:30:00'
        }

        with app_with_mock_cache.test_client() as client:
            response = client.get('/health')

        assert response.status_code == 200
        data = response.get_json()
        assert 'resource_cache' in data
        assert data['resource_cache']['enabled'] is True
        assert data['resource_cache']['loaded'] is True
        assert data['resource_cache']['count'] == 1500

    @patch('mes_dashboard.routes.health_routes.check_database')
    @patch('mes_dashboard.routes.health_routes.check_redis')
    @patch('mes_dashboard.routes.health_routes.get_cache_status')
    @patch('mes_dashboard.routes.health_routes.get_resource_cache_status')
    def test_health_warning_when_resource_cache_not_loaded(
        self, mock_res_cache_status, mock_cache_status, mock_check_redis, mock_check_db, app_with_mock_cache
    ):
        """Test health endpoint shows warning when resource cache enabled but not loaded."""
        mock_check_db.return_value = ('ok', None)
        mock_check_redis.return_value = ('ok', None)
        mock_cache_status.return_value = {
            'enabled': True,
            'sys_date': '2024-01-15 10:30:00',
            'updated_at': '2024-01-15T10:30:00'
        }
        mock_res_cache_status.return_value = {
            'enabled': True,
            'loaded': False,
            'count': 0,
            'version': None,
            'updated_at': None
        }

        with app_with_mock_cache.test_client() as client:
            response = client.get('/health')

        assert response.status_code == 200
        data = response.get_json()
        assert 'warnings' in data
        assert any('Resource cache not loaded' in w for w in data['warnings'])

    @patch('mes_dashboard.routes.health_routes.check_database')
    @patch('mes_dashboard.routes.health_routes.check_redis')
    @patch('mes_dashboard.routes.health_routes.get_cache_status')
    @patch('mes_dashboard.routes.health_routes.get_resource_cache_status')
    def test_health_no_warning_when_resource_cache_disabled(
        self, mock_res_cache_status, mock_cache_status, mock_check_redis, mock_check_db, app_with_mock_cache
    ):
        """Test health endpoint emits no warning when resource cache is disabled."""
        mock_check_db.return_value = ('ok', None)
        mock_check_redis.return_value = ('ok', None)
        mock_cache_status.return_value = {
            'enabled': True,
            'sys_date': '2024-01-15 10:30:00',
            'updated_at': '2024-01-15T10:30:00'
        }
        mock_res_cache_status.return_value = {'enabled': False}

        with app_with_mock_cache.test_client() as client:
            response = client.get('/health')

        assert response.status_code == 200
        data = response.get_json()
        # No warnings about resource cache
        warnings = data.get('warnings', [])
        assert not any('Resource cache' in w for w in warnings)
class TestResourceFilterOptionsWithCache:
    """Test resource filter options with cache."""

    @patch('mes_dashboard.services.resource_cache.get_all_resources')
    @patch('mes_dashboard.services.resource_service.read_sql_df')
    def test_filter_options_uses_resource_cache(
        self, mock_read_sql, mock_get_all, app_with_mock_cache
    ):
        """Test resource filter options uses resource_cache for static data."""
        # Mock resource cache data
        mock_get_all.return_value = [
            {'WORKCENTERNAME': 'WC1', 'RESOURCEFAMILYNAME': 'F1', 'PJ_DEPARTMENT': 'Dept1',
             'LOCATIONNAME': 'Loc1', 'PJ_ASSETSSTATUS': 'Active'},
            {'WORKCENTERNAME': 'WC2', 'RESOURCEFAMILYNAME': 'F2', 'PJ_DEPARTMENT': 'Dept1',
             'LOCATIONNAME': 'Loc1', 'PJ_ASSETSSTATUS': 'Active'},
        ]
        mock_read_sql.return_value = pd.DataFrame({'NEWSTATUSNAME': ['PRD', 'SBY']})

        with app_with_mock_cache.test_client() as client:
            response = client.get('/api/resource/filter_options')

        assert response.status_code == 200
        data = response.get_json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'WC1' in options['workcenters']
            assert 'WC2' in options['workcenters']
            assert 'F1' in options['families']
            assert 'F2' in options['families']


class TestResourceHistoryOptionsWithCache:
    """Test resource history filter options with cache."""

    @patch('mes_dashboard.services.filter_cache.get_workcenter_groups')
    @patch('mes_dashboard.services.resource_cache.get_all_resources')
    def test_history_options_uses_resource_cache(
        self, mock_get_all, mock_groups, app_with_mock_cache
    ):
        """Test resource history options uses resource_cache for families."""
        mock_groups.return_value = [
            {'name': 'Group1', 'sequence': 1},
            {'name': 'Group2', 'sequence': 2}
        ]
        # Mock resource cache data for families
        mock_get_all.return_value = [
            {'RESOURCEFAMILYNAME': 'Family1'},
            {'RESOURCEFAMILYNAME': 'Family2'},
            {'RESOURCEFAMILYNAME': 'Family1'},  # duplicate
        ]

        with app_with_mock_cache.test_client() as client:
            response = client.get('/api/resource/history/options')

        assert response.status_code == 200
        data = response.get_json()

        if data.get('success'):
            options = data.get('data', {})
            assert 'families' in options
            assert 'Family1' in options['families']
            assert 'Family2' in options['families']


class TestFallbackToOracle:
    """Test fallback to Oracle when cache is unavailable."""
485
tests/test_resource_cache.py
Normal file
@@ -0,0 +1,485 @@
# -*- coding: utf-8 -*-
"""Unit tests for resource_cache module.

Tests cache read/write functionality, fallback mechanism, and distinct values API.
"""

import pytest
from unittest.mock import patch, MagicMock
import pandas as pd
import json


class TestGetDistinctValues:
    """Test get_distinct_values function."""

    @pytest.fixture(autouse=True)
    def reset_modules(self):
        """Reset module state before each test."""
        import mes_dashboard.core.redis_client as rc
        rc._REDIS_CLIENT = None
        yield
        rc._REDIS_CLIENT = None

    def test_returns_sorted_unique_values(self):
        """Test returns sorted unique values from resources."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'WORKCENTERNAME': 'Station_B', 'RESOURCEFAMILYNAME': 'Family1'},
            {'WORKCENTERNAME': 'Station_A', 'RESOURCEFAMILYNAME': 'Family2'},
            {'WORKCENTERNAME': 'Station_B', 'RESOURCEFAMILYNAME': 'Family1'},  # duplicate
            {'WORKCENTERNAME': 'Station_C', 'RESOURCEFAMILYNAME': None},  # None value
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_distinct_values('WORKCENTERNAME')

        assert result == ['Station_A', 'Station_B', 'Station_C']

    def test_excludes_none_and_empty_strings(self):
        """Test excludes None and empty string values."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEFAMILYNAME': 'Family1'},
            {'RESOURCEFAMILYNAME': None},
            {'RESOURCEFAMILYNAME': ''},
            {'RESOURCEFAMILYNAME': 'Family2'},
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_distinct_values('RESOURCEFAMILYNAME')

        assert result == ['Family1', 'Family2']

    def test_handles_nan_values(self):
        """Test handles NaN values (pandas float NaN)."""
        import mes_dashboard.services.resource_cache as rc
        import numpy as np

        mock_resources = [
            {'WORKCENTERNAME': 'Station_A'},
            {'WORKCENTERNAME': float('nan')},  # NaN
            {'WORKCENTERNAME': np.nan},  # NumPy NaN
            {'WORKCENTERNAME': 'Station_B'},
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_distinct_values('WORKCENTERNAME')

        assert result == ['Station_A', 'Station_B']

    def test_handles_mixed_types(self):
        """Test handles mixed types (converts to string)."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'PJ_DEPARTMENT': 'Dept_A'},
            {'PJ_DEPARTMENT': 123},  # int
            {'PJ_DEPARTMENT': 'Dept_B'},
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_distinct_values('PJ_DEPARTMENT')

        assert '123' in result
        assert 'Dept_A' in result
        assert 'Dept_B' in result

    def test_returns_empty_list_when_no_resources(self):
        """Test returns empty list when no resources."""
        import mes_dashboard.services.resource_cache as rc

        with patch.object(rc, 'get_all_resources', return_value=[]):
            result = rc.get_distinct_values('WORKCENTERNAME')

        assert result == []
class TestConvenienceMethods:
    """Test convenience methods for common columns."""

    def test_get_resource_families_calls_get_distinct_values(self):
        """Test get_resource_families calls get_distinct_values with correct column."""
        import mes_dashboard.services.resource_cache as rc

        with patch.object(rc, 'get_distinct_values', return_value=['Family1', 'Family2']) as mock:
            result = rc.get_resource_families()

        mock.assert_called_once_with('RESOURCEFAMILYNAME')
        assert result == ['Family1', 'Family2']

    def test_get_workcenters_calls_get_distinct_values(self):
        """Test get_workcenters calls get_distinct_values with correct column."""
        import mes_dashboard.services.resource_cache as rc

        with patch.object(rc, 'get_distinct_values', return_value=['WC1', 'WC2']) as mock:
            result = rc.get_workcenters()

        mock.assert_called_once_with('WORKCENTERNAME')
        assert result == ['WC1', 'WC2']

    def test_get_departments_calls_get_distinct_values(self):
        """Test get_departments calls get_distinct_values with correct column."""
        import mes_dashboard.services.resource_cache as rc

        with patch.object(rc, 'get_distinct_values', return_value=['Dept1', 'Dept2']) as mock:
            result = rc.get_departments()

        mock.assert_called_once_with('PJ_DEPARTMENT')
        assert result == ['Dept1', 'Dept2']
class TestGetAllResources:
    """Test get_all_resources function."""

    @pytest.fixture(autouse=True)
    def reset_modules(self):
        """Reset module state before each test."""
        import mes_dashboard.core.redis_client as rc
        rc._REDIS_CLIENT = None
        yield
        rc._REDIS_CLIENT = None

    def test_returns_cached_data_when_available(self):
        """Test returns cached data from Redis when available."""
        import mes_dashboard.services.resource_cache as rc

        test_data = [
            {'RESOURCEID': 'R001', 'RESOURCENAME': 'Machine1'},
            {'RESOURCEID': 'R002', 'RESOURCENAME': 'Machine2'}
        ]
        cached_json = json.dumps(test_data)

        mock_client = MagicMock()
        mock_client.get.return_value = cached_json

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'get_redis_client', return_value=mock_client):
                    result = rc.get_all_resources()

        assert len(result) == 2
        assert result[0]['RESOURCEID'] == 'R001'

    def test_falls_back_to_oracle_when_cache_miss(self):
        """Test falls back to Oracle when cache is empty."""
        import mes_dashboard.services.resource_cache as rc

        mock_client = MagicMock()
        mock_client.get.return_value = None

        oracle_df = pd.DataFrame({
            'RESOURCEID': ['R001'],
            'RESOURCENAME': ['Machine1']
        })

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'get_redis_client', return_value=mock_client):
                    with patch.object(rc, '_load_from_oracle', return_value=oracle_df):
                        result = rc.get_all_resources()

        assert len(result) == 1
        assert result[0]['RESOURCEID'] == 'R001'

    def test_returns_empty_when_both_unavailable(self):
        """Test returns empty list when both cache and Oracle fail."""
        import mes_dashboard.services.resource_cache as rc

        mock_client = MagicMock()
        mock_client.get.return_value = None

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'get_redis_client', return_value=mock_client):
                    with patch.object(rc, '_load_from_oracle', return_value=None):
                        result = rc.get_all_resources()

        assert result == []
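The fallback chain these cases exercise (Redis hit, then Oracle, then an empty list) can be sketched independently of the real module. This is an illustrative stand-in, not the actual `get_all_resources` implementation; the data key name is an assumption modeled on the `mes_wip:resource:meta:*` keys seen in the status tests:

```python
# Illustrative sketch of the read path TestGetAllResources covers:
# 1) return the Redis-cached JSON when present,
# 2) otherwise fall back to Oracle (a DataFrame),
# 3) return [] when both are unavailable.
import json


def get_all_resources_sketch(redis_get, load_from_oracle, key="mes_wip:resource:data"):
    raw = redis_get(key)           # cache hit: JSON string of row dicts
    if raw:
        return json.loads(raw)
    df = load_from_oracle()        # cache miss: query the source table
    if df is None:
        return []                  # both unavailable: degrade to empty
    return df.to_dict(orient="records")
```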
class TestGetResourceById:
    """Test get_resource_by_id function."""

    def test_returns_matching_resource(self):
        """Test returns resource with matching ID."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'RESOURCENAME': 'Machine1'},
            {'RESOURCEID': 'R002', 'RESOURCENAME': 'Machine2'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resource_by_id('R002')

        assert result is not None
        assert result['RESOURCEID'] == 'R002'
        assert result['RESOURCENAME'] == 'Machine2'

    def test_returns_none_when_not_found(self):
        """Test returns None when ID not found."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'RESOURCENAME': 'Machine1'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resource_by_id('R999')

        assert result is None


class TestGetResourcesByIds:
    """Test get_resources_by_ids function."""

    def test_returns_matching_resources(self):
        """Test returns all resources with matching IDs."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'RESOURCENAME': 'Machine1'},
            {'RESOURCEID': 'R002', 'RESOURCENAME': 'Machine2'},
            {'RESOURCEID': 'R003', 'RESOURCENAME': 'Machine3'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resources_by_ids(['R001', 'R003'])

        assert len(result) == 2
        ids = [r['RESOURCEID'] for r in result]
        assert 'R001' in ids
        assert 'R003' in ids

    def test_ignores_missing_ids(self):
        """Test ignores IDs that don't exist."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'RESOURCENAME': 'Machine1'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resources_by_ids(['R001', 'R999'])

        assert len(result) == 1
        assert result[0]['RESOURCEID'] == 'R001'
class TestGetResourcesByFilter:
    """Test get_resources_by_filter function."""

    def test_filters_by_workcenter(self):
        """Test filters resources by workcenter."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'WORKCENTERNAME': 'WC1'},
            {'RESOURCEID': 'R002', 'WORKCENTERNAME': 'WC2'},
            {'RESOURCEID': 'R003', 'WORKCENTERNAME': 'WC1'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resources_by_filter(workcenters=['WC1'])

        assert len(result) == 2

    def test_filters_by_family(self):
        """Test filters resources by family."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'RESOURCEFAMILYNAME': 'F1'},
            {'RESOURCEID': 'R002', 'RESOURCEFAMILYNAME': 'F2'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resources_by_filter(families=['F1'])

        assert len(result) == 1
        assert result[0]['RESOURCEFAMILYNAME'] == 'F1'

    def test_filters_by_production_flag(self):
        """Test filters resources by production flag."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'PJ_ISPRODUCTION': 1},
            {'RESOURCEID': 'R002', 'PJ_ISPRODUCTION': 0},
            {'RESOURCEID': 'R003', 'PJ_ISPRODUCTION': 1}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resources_by_filter(is_production=True)

        assert len(result) == 2

    def test_combines_multiple_filters(self):
        """Test combines multiple filter criteria."""
        import mes_dashboard.services.resource_cache as rc

        mock_resources = [
            {'RESOURCEID': 'R001', 'WORKCENTERNAME': 'WC1', 'RESOURCEFAMILYNAME': 'F1'},
            {'RESOURCEID': 'R002', 'WORKCENTERNAME': 'WC1', 'RESOURCEFAMILYNAME': 'F2'},
            {'RESOURCEID': 'R003', 'WORKCENTERNAME': 'WC2', 'RESOURCEFAMILYNAME': 'F1'}
        ]

        with patch.object(rc, 'get_all_resources', return_value=mock_resources):
            result = rc.get_resources_by_filter(workcenters=['WC1'], families=['F1'])

        assert len(result) == 1
        assert result[0]['RESOURCEID'] == 'R001'
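The filter semantics these cases pin down — membership within a criterion, AND across criteria — reduce to a few list comprehensions. A minimal sketch, assuming the same column names as the test fixtures; this is not the module's actual code:

```python
def filter_resources(resources, workcenters=None, families=None, is_production=None):
    # Each supplied criterion narrows the result; criteria combine with AND.
    out = resources
    if workcenters is not None:
        out = [r for r in out if r.get("WORKCENTERNAME") in workcenters]
    if families is not None:
        out = [r for r in out if r.get("RESOURCEFAMILYNAME") in families]
    if is_production is not None:
        # PJ_ISPRODUCTION is stored as 0/1; compare as a boolean flag.
        out = [r for r in out if bool(r.get("PJ_ISPRODUCTION")) == is_production]
    return out
```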
class TestGetCacheStatus:
    """Test get_cache_status function."""

    @pytest.fixture(autouse=True)
    def reset_modules(self):
        """Reset module state before each test."""
        import mes_dashboard.core.redis_client as rc
        rc._REDIS_CLIENT = None
        yield
        rc._REDIS_CLIENT = None

    def test_returns_disabled_when_cache_disabled(self):
        """Test returns disabled status when cache is disabled."""
        import mes_dashboard.services.resource_cache as rc

        with patch.object(rc, 'REDIS_ENABLED', False):
            result = rc.get_cache_status()

        assert result['enabled'] is False
        assert result['loaded'] is False

    def test_returns_loaded_status_when_data_exists(self):
        """Test returns loaded status when cache has data."""
        import mes_dashboard.services.resource_cache as rc

        mock_client = MagicMock()
        mock_client.exists.return_value = 1
        mock_client.get.side_effect = lambda key: {
            'mes_wip:resource:meta:count': '1000',
            'mes_wip:resource:meta:version': '2024-01-15T10:00:00',
            'mes_wip:resource:meta:updated': '2024-01-15T10:30:00',
        }.get(key)

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'get_redis_client', return_value=mock_client):
                    result = rc.get_cache_status()

        assert result['enabled'] is True
        assert result['loaded'] is True
class TestRefreshCache:
    """Test refresh_cache function."""

    @pytest.fixture(autouse=True)
    def reset_modules(self):
        """Reset module state before each test."""
        import mes_dashboard.core.redis_client as rc
        rc._REDIS_CLIENT = None
        yield
        rc._REDIS_CLIENT = None

    def test_returns_false_when_disabled(self):
        """Test returns False when cache is disabled."""
        import mes_dashboard.services.resource_cache as rc

        with patch.object(rc, 'REDIS_ENABLED', False):
            result = rc.refresh_cache()

        assert result is False

    def test_skips_sync_when_version_unchanged(self):
        """Test skips sync when Oracle version matches Redis version."""
        import mes_dashboard.services.resource_cache as rc

        mock_client = MagicMock()
        mock_client.get.return_value = '2024-01-15T10:00:00'
        mock_client.ping.return_value = True

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'redis_available', return_value=True):
                    with patch.object(rc, '_get_version_from_oracle', return_value='2024-01-15T10:00:00'):
                        with patch.object(rc, '_get_version_from_redis', return_value='2024-01-15T10:00:00'):
                            result = rc.refresh_cache(force=False)

        assert result is False

    def test_syncs_when_version_changed(self):
        """Test syncs when Oracle version differs from Redis version."""
        import mes_dashboard.services.resource_cache as rc

        mock_df = pd.DataFrame({
            'RESOURCEID': ['R001'],
            'RESOURCENAME': ['Machine1']
        })

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'redis_available', return_value=True):
                    with patch.object(rc, '_get_version_from_oracle', return_value='2024-01-15T11:00:00'):
                        with patch.object(rc, '_get_version_from_redis', return_value='2024-01-15T10:00:00'):
                            with patch.object(rc, '_load_from_oracle', return_value=mock_df):
                                with patch.object(rc, '_sync_to_redis', return_value=True) as mock_sync:
                                    result = rc.refresh_cache(force=False)

        assert result is True
        mock_sync.assert_called_once()

    def test_force_sync_ignores_version(self):
        """Test force sync ignores version comparison."""
        import mes_dashboard.services.resource_cache as rc

        mock_df = pd.DataFrame({
            'RESOURCEID': ['R001'],
            'RESOURCENAME': ['Machine1']
        })

        with patch.object(rc, 'REDIS_ENABLED', True):
            with patch.object(rc, 'RESOURCE_CACHE_ENABLED', True):
                with patch.object(rc, 'redis_available', return_value=True):
                    with patch.object(rc, '_get_version_from_oracle', return_value='2024-01-15T10:00:00'):
                        with patch.object(rc, '_get_version_from_redis', return_value='2024-01-15T10:00:00'):
                            with patch.object(rc, '_load_from_oracle', return_value=mock_df):
                                with patch.object(rc, '_sync_to_redis', return_value=True) as mock_sync:
                                    result = rc.refresh_cache(force=True)

        assert result is True
        mock_sync.assert_called_once()
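The refresh decision these tests encode — skip when the `MAX(LASTCHANGEDATE)` version is unchanged, sync when it differs or when forced — reduces to a small comparison. The helper names below echo the mocked `_get_version_from_*` hooks, but the body is an illustrative sketch, not the real `refresh_cache`:

```python
from typing import Callable, Optional


def refresh_decision(
    oracle_version: Callable[[], Optional[str]],
    redis_version: Callable[[], Optional[str]],
    sync: Callable[[], bool],
    force: bool = False,
) -> bool:
    """Return True only when a sync actually ran."""
    ora = oracle_version()  # e.g. MAX(LASTCHANGEDATE) rendered as ISO text
    if ora is None:
        return False        # source version unreadable: keep the current cache
    if not force and ora == redis_version():
        return False        # unchanged since the last sync: skip
    return sync()           # version moved (or force=True): resync
```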
class TestBuildFilterSql:
    """Test _build_filter_sql function."""

    def test_includes_equipment_type_filter(self):
        """Test includes equipment type filter."""
        import mes_dashboard.services.resource_cache as rc

        sql = rc._build_filter_sql()

        assert 'OBJECTCATEGORY' in sql
        assert 'ASSEMBLY' in sql or 'WAFERSORT' in sql

    def test_includes_location_filter(self):
        """Test includes location exclusion filter."""
        import mes_dashboard.services.resource_cache as rc

        sql = rc._build_filter_sql()

        assert 'LOCATIONNAME' in sql

    def test_includes_asset_status_filter(self):
        """Test includes asset status exclusion filter."""
        import mes_dashboard.services.resource_cache as rc

        sql = rc._build_filter_sql()

        assert 'PJ_ASSETSSTATUS' in sql
@@ -1,11 +1,12 @@
# -*- coding: utf-8 -*-
"""Unit tests for WIP service layer.

Tests the WIP query functions that use DWH.DW_PJ_LOT_V view.
Tests the WIP query functions that use DW_MES_LOT_V view.
"""

import unittest
from unittest.mock import patch, MagicMock
from functools import wraps
import pandas as pd

from mes_dashboard.services.wip_service import (
@@ -23,13 +24,22 @@ from mes_dashboard.services.wip_service import (
)


def disable_cache(func):
    """Decorator to disable Redis cache for Oracle fallback tests."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        with patch('mes_dashboard.services.wip_service.get_cached_wip_data', return_value=None):
            with patch('mes_dashboard.services.wip_service.get_cached_sys_date', return_value=None):
                return func(*args, **kwargs)
    return wrapper


class TestWipServiceConfig(unittest.TestCase):
    """Test WIP service configuration."""

    def test_wip_view_has_schema_prefix(self):
        """WIP_VIEW should include DWH schema prefix."""
        self.assertEqual(WIP_VIEW, "DWH.DW_PJ_LOT_V")
        self.assertTrue(WIP_VIEW.startswith("DWH."))
    def test_wip_view_configured(self):
        """WIP_VIEW should be configured correctly."""
        self.assertEqual(WIP_VIEW, "DW_MES_LOT_V")


class TestEscapeSql(unittest.TestCase):
@@ -91,6 +101,7 @@ class TestBuildBaseConditions(unittest.TestCase):
class TestGetWipSummary(unittest.TestCase):
    """Test get_wip_summary function."""

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_none_on_empty_result(self, mock_read_sql):
        """Should return None when query returns empty DataFrame."""
@@ -100,6 +111,7 @@ class TestGetWipSummary(unittest.TestCase):

        self.assertIsNone(result)

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_none_on_exception(self, mock_read_sql):
        """Should return None when query raises exception."""
@@ -114,6 +126,7 @@ class TestGetWipSummary(unittest.TestCase):
class TestGetWipMatrix(unittest.TestCase):
    """Test get_wip_matrix function."""

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_matrix_structure(self, mock_read_sql):
        """Should return dict with matrix structure."""
@@ -135,6 +148,7 @@ class TestGetWipMatrix(unittest.TestCase):
        self.assertIn('package_totals', result)
        self.assertIn('grand_total', result)

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_workcenters_sorted_by_sequence(self, mock_read_sql):
        """Workcenters should be sorted by WORKCENTERSEQUENCE_GROUP."""
@@ -150,6 +164,7 @@ class TestGetWipMatrix(unittest.TestCase):

        self.assertEqual(result['workcenters'], ['切割', '焊接_DB'])

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_packages_sorted_by_qty_desc(self, mock_read_sql):
        """Packages should be sorted by total QTY descending."""
@@ -165,6 +180,7 @@ class TestGetWipMatrix(unittest.TestCase):

        self.assertEqual(result['packages'][0], 'SOT-23')  # Higher QTY first

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_empty_structure_on_empty_result(self, mock_read_sql):
        """Should return empty structure when no data."""
@@ -177,6 +193,7 @@ class TestGetWipMatrix(unittest.TestCase):
        self.assertEqual(result['packages'], [])
        self.assertEqual(result['grand_total'], 0)

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_calculates_totals_correctly(self, mock_read_sql):
        """Should calculate workcenter and package totals correctly."""
@@ -198,6 +215,7 @@ class TestGetWipMatrix(unittest.TestCase):
class TestGetWipHoldSummary(unittest.TestCase):
    """Test get_wip_hold_summary function."""

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_hold_items(self, mock_read_sql):
        """Should return list of hold items."""
@@ -216,6 +234,7 @@ class TestGetWipHoldSummary(unittest.TestCase):
        self.assertEqual(result['items'][0]['reason'], 'YieldLimit')
        self.assertEqual(result['items'][0]['lots'], 21)

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_empty_items_on_no_holds(self, mock_read_sql):
        """Should return empty items list when no holds."""
@@ -230,6 +249,7 @@ class TestGetWipHoldSummary(unittest.TestCase):
class TestGetWipDetail(unittest.TestCase):
    """Test get_wip_detail function."""

    @disable_cache
    @patch('mes_dashboard.services.wip_service.read_sql_df')
    def test_returns_none_on_empty_summary(self, mock_read_sql):
|
||||
"""Should return None when summary query returns empty."""
|
||||
@@ -243,6 +263,7 @@ class TestGetWipDetail(unittest.TestCase):
|
||||
class TestGetWorkcenters(unittest.TestCase):
|
||||
"""Test get_workcenters function."""
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_workcenter_list(self, mock_read_sql):
|
||||
"""Should return list of workcenters with lot counts."""
|
||||
@@ -260,6 +281,7 @@ class TestGetWorkcenters(unittest.TestCase):
|
||||
self.assertEqual(result[0]['name'], '切割')
|
||||
self.assertEqual(result[0]['lot_count'], 1377)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_on_no_data(self, mock_read_sql):
|
||||
"""Should return empty list when no workcenters."""
|
||||
@@ -269,6 +291,7 @@ class TestGetWorkcenters(unittest.TestCase):
|
||||
|
||||
self.assertEqual(result, [])
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_none_on_exception(self, mock_read_sql):
|
||||
"""Should return None on exception."""
|
||||
@@ -282,6 +305,7 @@ class TestGetWorkcenters(unittest.TestCase):
|
||||
class TestGetPackages(unittest.TestCase):
|
||||
"""Test get_packages function."""
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_package_list(self, mock_read_sql):
|
||||
"""Should return list of packages with lot counts."""
|
||||
@@ -298,6 +322,7 @@ class TestGetPackages(unittest.TestCase):
|
||||
self.assertEqual(result[0]['name'], 'SOT-23')
|
||||
self.assertEqual(result[0]['lot_count'], 2234)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_on_no_data(self, mock_read_sql):
|
||||
"""Should return empty list when no packages."""
|
||||
@@ -311,6 +336,7 @@ class TestGetPackages(unittest.TestCase):
|
||||
class TestSearchWorkorders(unittest.TestCase):
|
||||
"""Test search_workorders function."""
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_matching_workorders(self, mock_read_sql):
|
||||
"""Should return list of matching WORKORDER values."""
|
||||
@@ -325,6 +351,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
self.assertEqual(len(result), 3)
|
||||
self.assertEqual(result[0], 'GA26012001')
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_for_short_query(self, mock_read_sql):
|
||||
"""Should return empty list for query < 2 characters."""
|
||||
@@ -333,6 +360,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
self.assertEqual(result, [])
|
||||
mock_read_sql.assert_not_called()
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_for_empty_query(self, mock_read_sql):
|
||||
"""Should return empty list for empty query."""
|
||||
@@ -341,6 +369,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
self.assertEqual(result, [])
|
||||
mock_read_sql.assert_not_called()
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_on_no_matches(self, mock_read_sql):
|
||||
"""Should return empty list when no matches found."""
|
||||
@@ -350,6 +379,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
|
||||
self.assertEqual(result, [])
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_respects_limit_parameter(self, mock_read_sql):
|
||||
"""Should respect the limit parameter."""
|
||||
@@ -362,6 +392,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
|
||||
self.assertEqual(len(result), 2)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_caps_limit_at_50(self, mock_read_sql):
|
||||
"""Should cap limit at 50."""
|
||||
@@ -374,6 +405,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertIn('FETCH FIRST 50 ROWS ONLY', call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_none_on_exception(self, mock_read_sql):
|
||||
"""Should return None on exception."""
|
||||
@@ -383,6 +415,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
|
||||
self.assertIsNone(result)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""Should exclude DUMMY lots by default."""
|
||||
@@ -394,6 +427,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_includes_dummy_when_specified(self, mock_read_sql):
|
||||
"""Should include DUMMY lots when include_dummy=True."""
|
||||
@@ -409,6 +443,7 @@ class TestSearchWorkorders(unittest.TestCase):
|
||||
class TestSearchLotIds(unittest.TestCase):
|
||||
"""Test search_lot_ids function."""
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_matching_lotids(self, mock_read_sql):
|
||||
"""Should return list of matching LOTID values."""
|
||||
@@ -423,6 +458,7 @@ class TestSearchLotIds(unittest.TestCase):
|
||||
self.assertEqual(len(result), 2)
|
||||
self.assertEqual(result[0], 'GA26012345-A00-001')
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_for_short_query(self, mock_read_sql):
|
||||
"""Should return empty list for query < 2 characters."""
|
||||
@@ -431,6 +467,7 @@ class TestSearchLotIds(unittest.TestCase):
|
||||
self.assertEqual(result, [])
|
||||
mock_read_sql.assert_not_called()
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_empty_list_on_no_matches(self, mock_read_sql):
|
||||
"""Should return empty list when no matches found."""
|
||||
@@ -440,6 +477,7 @@ class TestSearchLotIds(unittest.TestCase):
|
||||
|
||||
self.assertEqual(result, [])
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_returns_none_on_exception(self, mock_read_sql):
|
||||
"""Should return None on exception."""
|
||||
@@ -449,6 +487,7 @@ class TestSearchLotIds(unittest.TestCase):
|
||||
|
||||
self.assertIsNone(result)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""Should exclude DUMMY lots by default."""
|
||||
@@ -464,6 +503,7 @@ class TestSearchLotIds(unittest.TestCase):
|
||||
class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
"""Test DUMMY exclusion is applied in all WIP functions."""
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_wip_summary_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""get_wip_summary should exclude DUMMY by default."""
|
||||
@@ -485,6 +525,7 @@ class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_wip_summary_includes_dummy_when_specified(self, mock_read_sql):
|
||||
"""get_wip_summary should include DUMMY when specified."""
|
||||
@@ -506,6 +547,7 @@ class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertNotIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_wip_matrix_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""get_wip_matrix should exclude DUMMY by default."""
|
||||
@@ -522,6 +564,7 @@ class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_wip_hold_summary_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""get_wip_hold_summary should exclude DUMMY by default."""
|
||||
@@ -535,6 +578,7 @@ class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_workcenters_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""get_workcenters should exclude DUMMY by default."""
|
||||
@@ -550,6 +594,7 @@ class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
call_args = mock_read_sql.call_args[0][0]
|
||||
self.assertIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_packages_excludes_dummy_by_default(self, mock_read_sql):
|
||||
"""get_packages should exclude DUMMY by default."""
|
||||
@@ -567,6 +612,7 @@ class TestDummyExclusionInAllFunctions(unittest.TestCase):
|
||||
class TestMultipleFilterConditions(unittest.TestCase):
|
||||
"""Test multiple filter conditions work together."""
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_wip_summary_with_all_filters(self, mock_read_sql):
|
||||
"""get_wip_summary should combine all filter conditions."""
|
||||
@@ -590,6 +636,7 @@ class TestMultipleFilterConditions(unittest.TestCase):
|
||||
self.assertIn("LOTID LIKE '%A00%'", call_args)
|
||||
self.assertIn("LOTID NOT LIKE '%DUMMY%'", call_args)
|
||||
|
||||
@disable_cache
|
||||
@patch('mes_dashboard.services.wip_service.read_sql_df')
|
||||
def test_get_wip_matrix_with_all_filters(self, mock_read_sql):
|
||||
"""get_wip_matrix should combine all filter conditions."""
|
||||
|
||||
@@ -6,10 +6,12 @@
import sys
import io
import os
-import oracledb
import json
+import argparse
+from pathlib import Path

+import oracledb
+
# Configure UTF-8 output encoding
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', errors='replace')
@@ -34,7 +36,7 @@ DB_CONFIG = {
    'dsn': f'(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST={DB_HOST})(PORT={DB_PORT})))(CONNECT_DATA=(SERVICE_NAME={DB_SERVICE})))'
}

-# MES table list
+# MES table list (default set)
MES_TABLES = [
    'DW_MES_CONTAINER',
    'DW_MES_HOLDRELEASEHISTORY',
@@ -54,8 +56,7 @@ MES_TABLES = [
    'DW_MES_RESOURCE'
]


-def get_table_schema(cursor, table_name):
+def get_table_schema(cursor, table_name, owner=None):
    """Fetch a table's column structure."""
    query = """
        SELECT
@@ -69,9 +70,13 @@ def get_table_schema(cursor, table_name):
            COLUMN_ID
        FROM ALL_TAB_COLUMNS
        WHERE TABLE_NAME = :table_name
-        ORDER BY COLUMN_ID
    """
-    cursor.execute(query, table_name=table_name)
+    if owner:
+        query += " AND OWNER = :owner ORDER BY COLUMN_ID"
+        cursor.execute(query, table_name=table_name, owner=owner)
+    else:
+        query += " ORDER BY COLUMN_ID"
+        cursor.execute(query, table_name=table_name)
    columns = cursor.fetchall()

    schema = []
@@ -91,29 +96,39 @@ def get_table_schema(cursor, table_name):
    return schema


-def get_table_comments(cursor, table_name):
+def get_table_comments(cursor, table_name, owner=None):
    """Fetch table and column comments."""
    # Table comment
-    cursor.execute("""
+    table_query = """
        SELECT COMMENTS
        FROM ALL_TAB_COMMENTS
        WHERE TABLE_NAME = :table_name
-    """, table_name=table_name)
+    """
+    if owner:
+        table_query += " AND OWNER = :owner"
+        cursor.execute(table_query, table_name=table_name, owner=owner)
+    else:
+        cursor.execute(table_query, table_name=table_name)
    table_comment = cursor.fetchone()

    # Column comments
-    cursor.execute("""
+    col_query = """
        SELECT COLUMN_NAME, COMMENTS
        FROM ALL_COL_COMMENTS
        WHERE TABLE_NAME = :table_name
-        ORDER BY COLUMN_NAME
-    """, table_name=table_name)
+    """
+    if owner:
+        col_query += " AND OWNER = :owner ORDER BY COLUMN_NAME"
+        cursor.execute(col_query, table_name=table_name, owner=owner)
+    else:
+        col_query += " ORDER BY COLUMN_NAME"
+        cursor.execute(col_query, table_name=table_name)
    column_comments = {row[0]: row[1] for row in cursor.fetchall()}

    return table_comment[0] if table_comment else None, column_comments


-def get_table_indexes(cursor, table_name):
+def get_table_indexes(cursor, table_name, owner=None):
    """Fetch a table's index information."""
    query = """
        SELECT
@@ -126,14 +141,24 @@ def get_table_indexes(cursor, table_name):
        GROUP BY i.INDEX_NAME, i.UNIQUENESS
        ORDER BY i.INDEX_NAME
    """
-    cursor.execute(query, table_name=table_name)
+    if owner:
+        query = query.replace(
+            "WHERE i.TABLE_NAME = :table_name",
+            "WHERE i.TABLE_NAME = :table_name AND i.TABLE_OWNER = :owner",
+        )
+        cursor.execute(query, table_name=table_name, owner=owner)
+    else:
+        cursor.execute(query, table_name=table_name)
    return cursor.fetchall()


-def get_sample_data(cursor, table_name, limit=5):
+def get_sample_data(cursor, table_name, owner=None, limit=5):
    """Fetch sample rows from a table."""
    try:
-        cursor.execute(f"SELECT * FROM {table_name} WHERE ROWNUM <= {limit}")
+        if owner:
+            cursor.execute(f"SELECT * FROM {owner}.{table_name} WHERE ROWNUM <= {limit}")
+        else:
+            cursor.execute(f"SELECT * FROM {table_name} WHERE ROWNUM <= {limit}")
        columns = [col[0] for col in cursor.description]
        rows = cursor.fetchall()
        return columns, rows
@@ -143,35 +168,67 @@ def get_sample_data(cursor, table_name, limit=5):

def main():
    """Main entry point."""
+    parser = argparse.ArgumentParser(description="Query Oracle table/view schema information")
+    parser.add_argument(
+        "--schema",
+        help="Schema/owner to scan (e.g. DWH). If set, scans all TABLE/VIEW in that schema.",
+    )
+    parser.add_argument(
+        "--output",
+        help="Output JSON path (default: data/table_schema_info.json)",
+        default=None,
+    )
+    args = parser.parse_args()
+
    print("Connecting to database...")
    connection = oracledb.connect(**DB_CONFIG)
    cursor = connection.cursor()

    all_table_info = {}

-    print(f"\nQuerying schema information for {len(MES_TABLES)} tables...\n")
+    owner = args.schema.strip().upper() if args.schema else None
+    if owner:
+        cursor.execute(
+            """
+            SELECT OBJECT_NAME
+            FROM ALL_OBJECTS
+            WHERE OWNER = :owner
+              AND OBJECT_TYPE IN ('TABLE', 'VIEW')
+            ORDER BY OBJECT_NAME
+            """,
+            owner=owner,
+        )
+        table_list = [row[0] for row in cursor.fetchall()]
+    else:
+        table_list = MES_TABLES

-    for idx, table_name in enumerate(MES_TABLES, 1):
-        print(f"[{idx}/{len(MES_TABLES)}] Processing {table_name}...")
+    print(f"\nQuerying schema information for {len(table_list)} objects...\n")
+
+    for idx, table_name in enumerate(table_list, 1):
+        print(f"[{idx}/{len(table_list)}] Processing {table_name}...")

        try:
            # Fetch table structure
-            schema = get_table_schema(cursor, table_name)
+            schema = get_table_schema(cursor, table_name, owner=owner)

            # Fetch comments
-            table_comment, column_comments = get_table_comments(cursor, table_name)
+            table_comment, column_comments = get_table_comments(cursor, table_name, owner=owner)

            # Fetch indexes
-            indexes = get_table_indexes(cursor, table_name)
+            indexes = get_table_indexes(cursor, table_name, owner=owner)

            # Fetch row count
-            cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
+            if owner:
+                cursor.execute(f"SELECT COUNT(*) FROM {owner}.{table_name}")
+            else:
+                cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
            row_count = cursor.fetchone()[0]

            # Fetch sample data
-            sample_columns, sample_data = get_sample_data(cursor, table_name, limit=3)
+            sample_columns, sample_data = get_sample_data(cursor, table_name, owner=owner, limit=3)

            all_table_info[table_name] = {
+                'owner': owner,
                'table_comment': table_comment,
                'row_count': row_count,
                'schema': schema,
@@ -186,7 +243,10 @@ def main():
            all_table_info[table_name] = {'error': str(e)}

    # Save to JSON file
-    output_file = Path(__file__).resolve().parent.parent / 'data' / 'table_schema_info.json'
+    if args.output:
+        output_file = Path(args.output)
+    else:
+        output_file = Path(__file__).resolve().parent.parent / 'data' / 'table_schema_info.json'
    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(all_table_info, f, ensure_ascii=False, indent=2, default=str)

@@ -199,4 +259,3 @@ def main():

if __name__ == "__main__":
    main()
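The owner-aware helpers above all follow one pattern: the `OWNER` predicate is appended only when a schema is supplied, so the bind variables always match the placeholders in the final SQL text. A compact, standalone sketch of that pattern (no Oracle connection needed; the example table name is illustrative):

```python
# Sketch of the bind-safe query construction used in query_table_schema.py:
# append the OWNER filter only when a schema/owner is given, keeping the
# bind dictionary in sync with the placeholders in the SQL string.
def build_schema_query(owner=None):
    query = (
        "SELECT COLUMN_NAME, DATA_TYPE, COLUMN_ID "
        "FROM ALL_TAB_COLUMNS "
        "WHERE TABLE_NAME = :table_name"
    )
    binds = {"table_name": "DW_MES_RESOURCE"}  # example table name
    if owner:
        query += " AND OWNER = :owner ORDER BY COLUMN_ID"
        binds["owner"] = owner
    else:
        query += " ORDER BY COLUMN_ID"
    return query, binds
```

Filtering on `OWNER` matters because `ALL_TAB_COLUMNS` lists every accessible schema; without it, a table name that exists under several owners would return mixed column sets.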
194 tools/update_oracle_authorized_objects.py Normal file
@@ -0,0 +1,194 @@
#!/usr/bin/env python3
"""
Generate a list of accessible TABLE/VIEW objects under a specific owner (default: DWH)
and update docs/Oracle_Authorized_Objects.md.
"""

import os
import sys
from collections import Counter
from datetime import datetime
from pathlib import Path

import oracledb


def load_env() -> None:
    """Load .env if available (best-effort)."""
    try:
        from dotenv import load_dotenv  # type: ignore

        env_path = Path(__file__).resolve().parent.parent / ".env"
        load_dotenv(env_path)
        return
    except Exception:
        pass

    env_path = Path(__file__).resolve().parent.parent / ".env"
    if not env_path.exists():
        return
    for line in env_path.read_text().splitlines():
        if not line or line.strip().startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        os.environ.setdefault(key.strip(), value.strip())


def get_connection():
    host = os.getenv("DB_HOST", "10.1.1.58")
    port = os.getenv("DB_PORT", "1521")
    service = os.getenv("DB_SERVICE", "DWDB")
    user = os.getenv("DB_USER", "")
    password = os.getenv("DB_PASSWORD", "")
    dsn = (
        "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)"
        f"(HOST={host})(PORT={port})))(CONNECT_DATA=(SERVICE_NAME={service})))"
    )
    return oracledb.connect(user=user, password=password, dsn=dsn)


def main() -> int:
    owner = "DWH"
    output_path = Path("docs/Oracle_Authorized_Objects.md")
    if len(sys.argv) > 1:
        owner = sys.argv[1].strip().upper()

    load_env()
    conn = get_connection()
    cur = conn.cursor()

    cur.execute("SELECT USER FROM DUAL")
    user = cur.fetchone()[0]

    # Roles
    cur.execute("SELECT GRANTED_ROLE FROM USER_ROLE_PRIVS")
    roles = [r[0] for r in cur.fetchall()]

    # Accessible objects under owner
    cur.execute(
        """
        SELECT OBJECT_NAME, OBJECT_TYPE
        FROM ALL_OBJECTS
        WHERE OWNER = :p_owner
          AND OBJECT_TYPE IN ('TABLE', 'VIEW')
        ORDER BY OBJECT_NAME
        """,
        p_owner=owner,
    )
    objects = cur.fetchall()

    # Direct + PUBLIC grants
    cur.execute(
        """
        SELECT o.object_name, o.object_type, p.privilege,
               CASE WHEN p.grantee = 'PUBLIC' THEN 'PUBLIC' ELSE 'DIRECT' END AS source
        FROM all_tab_privs p
        JOIN all_objects o
          ON o.owner = p.table_schema
         AND o.object_name = p.table_name
        WHERE p.grantee IN (:p_user, 'PUBLIC')
          AND o.owner = :p_owner
          AND o.object_type IN ('TABLE', 'VIEW')
        """,
        p_user=user,
        p_owner=owner,
    )
    direct_rows = cur.fetchall()

    # Role grants
    role_rows = []
    for role in roles:
        cur.execute(
            """
            SELECT o.object_name, o.object_type, p.privilege, p.role AS source
            FROM role_tab_privs p
            JOIN all_objects o
              ON o.owner = p.owner
             AND o.object_name = p.table_name
            WHERE p.role = :p_role
              AND o.owner = :p_owner
              AND o.object_type IN ('TABLE', 'VIEW')
            """,
            p_role=role,
            p_owner=owner,
        )
        role_rows.extend(cur.fetchall())

    # Aggregate privileges by object
    info = {}
    for name, otype in objects:
        info[(name, otype)] = {"privs": set(), "sources": set()}

    for name, otype, priv, source in direct_rows + role_rows:
        key = (name, otype)
        if key not in info:
            info[key] = {"privs": set(), "sources": set()}
        info[key]["privs"].add(priv)
        info[key]["sources"].add(source)

    # Fill in missing privilege/source if object is visible but not in grants
    for key, data in info.items():
        if not data["privs"]:
            data["privs"].add("UNKNOWN")
        if not data["sources"]:
            data["sources"].add("SYSTEM")

    type_counts = Counter(k[1] for k in info.keys())
    source_counts = Counter()
    for data in info.values():
        for s in data["sources"]:
            if s in ("DIRECT", "PUBLIC"):
                source_counts[s] += 1
            elif s == "SYSTEM":
                source_counts["SYSTEM"] += 1
            else:
                source_counts["ROLE"] += 1

    # Render markdown
    lines = []
    lines.append("# Oracle 可使用 TABLE/VIEW 清單(DWH)")
    lines.append("")
    lines.append(f"**產生時間**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
    lines.append(f"**使用者**: {user}")
    lines.append(f"**Schema**: {owner}")
    lines.append("")
    lines.append("## 摘要")
    lines.append("")
    lines.append(f"- 可使用物件總數: {len(info):,}")
    lines.append(f"- TABLE: {type_counts.get('TABLE', 0):,}")
    lines.append(f"- VIEW: {type_counts.get('VIEW', 0):,}")
    lines.append(
        "- 來源 (去重後物件數): "
        f"DIRECT {source_counts.get('DIRECT', 0):,}, "
        f"PUBLIC {source_counts.get('PUBLIC', 0):,}, "
        f"ROLE {source_counts.get('ROLE', 0):,}, "
        f"SYSTEM {source_counts.get('SYSTEM', 0):,}"
    )
    lines.append("")
    lines.append("## 物件清單")
    lines.append("")
    lines.append("| 物件 | 類型 | 權限 | 授權來源 |")
    lines.append("|------|------|------|----------|")

    for name, otype in sorted(info.keys()):
        data = info[(name, otype)]
        obj = f"{owner}.{name}"
        privs = ", ".join(sorted(data["privs"]))
        sources = ", ".join(
            sorted(
                "ROLE" if s not in ("DIRECT", "PUBLIC", "SYSTEM") else s
                for s in data["sources"]
            )
        )
        lines.append(f"| `{obj}` | {otype} | {privs} | {sources} |")

    output_path.write_text("\n".join(lines), encoding="utf-8")

    cur.close()
    conn.close()
    print(f"Wrote {output_path} ({len(info)} objects)")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
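For readers skimming the new tool, its privilege roll-up reduces to a small merge-and-tally step: grant rows of (object, type, privilege, source) are merged into per-object sets, then the grant sources are counted, with anything that is not DIRECT or PUBLIC treated as a role grant. A self-contained sketch with invented sample rows (the role name below is hypothetical):

```python
from collections import Counter

# Made-up sample data mirroring the shapes the script reads from Oracle.
objects = [("DW_MES_LOT_V", "VIEW"), ("DW_MES_RESOURCE", "TABLE")]
grant_rows = [
    ("DW_MES_LOT_V", "VIEW", "SELECT", "PUBLIC"),
    ("DW_MES_RESOURCE", "TABLE", "SELECT", "SOME_READER_ROLE"),  # hypothetical role
]

# Merge grant rows into per-object privilege/source sets.
info = {key: {"privs": set(), "sources": set()} for key in objects}
for name, otype, priv, source in grant_rows:
    entry = info.setdefault((name, otype), {"privs": set(), "sources": set()})
    entry["privs"].add(priv)
    entry["sources"].add(source)

# Tally sources; anything not DIRECT/PUBLIC counts as a role grant.
source_counts = Counter()
for data in info.values():
    for s in data["sources"]:
        source_counts[s if s in ("DIRECT", "PUBLIC") else "ROLE"] += 1
```

Using sets keeps repeated grants of the same privilege from inflating the counts, which is why the summary in the generated markdown is labeled as deduplicated object counts.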