diff --git a/README.md b/README.md
index ec3c5e0..666e831 100644
--- a/README.md
+++ b/README.md
@@ -40,11 +40,13 @@
| WIP 三頁 Vue 3 遷移(Overview/Detail/Hold Detail) | ✅ 已完成 |
| 設備雙頁 Vue 3 遷移(Status/History) | ✅ 已完成 |
| 設備快取 DataFrame TTL 一致性修復 | ✅ 已完成 |
+| 中段製程不良追溯分析(TMTT → 上游) | ✅ 已完成 |
---
## 開發歷史(Vite 重構後)
+- 2026-02-10:完成中段製程不良追溯分析(`/mid-section-defect`)— TMTT 測試站不良回溯至上游機台/站點/製程。三段式資料管線(TMTT 偵測 → SPLITFROMID BFS + COMBINEDASSYLOTS 合批展開 → 上游製程歷史),支援 205 種不良原因篩選、6 張 Pareto 圖表、日趨勢、LOT 明細分頁(200 筆/頁)。Loss reasons 24h Redis 快取、分析結果 5 分鐘快取、Detail API 分離(summary ~16KB + detail ~110KB/page,原 72MB 單次回應)。
- 2026-02-09:完成設備雙頁 Vue 3 遷移(`/resource`、`/resource-history`)— 兩頁共 1,697 行 vanilla JS + 3,200 行 Jinja2 模板重寫為 Vue 3 SFC。抽取 `resource-shared/` 共用模組(CSS 基底、E10 狀態常數、HierarchyTable 三層展開樹表元件),History 頁 4 個 ECharts 圖表改用 vue-echarts,Status 頁複用 `useAutoRefresh` composable(5 分鐘自動刷新)。
- 2026-02-09:完成 WIP 三頁 Vue 3 遷移(`/wip-overview`、`/wip-detail`、`/hold-detail`)— 三頁共 1,941 行 vanilla JS + Jinja2 重寫為 Vue 3 SFC。抽取共用 CSS/常數/元件至 `wip-shared/`,Pareto 圖改用 vue-echarts(與 QC-GATE 一致),Hold Detail 新增前端 URL params 判斷取代 Jinja2 注入。
- 2026-02-09:完成數據表查詢頁面(`/tables`)Vue 3 遷移 — 第二個純 Vite 頁面,建立 `apiPost` POST 請求模式,237 行 vanilla JS 重寫為 Vue 3 SFC 元件。
@@ -460,7 +462,7 @@ A: 請確認瀏覽器允許下載檔案,並檢查查詢結果是否有資料
透過側邊欄抽屜分組導覽切換各功能模組:
- **報表類**:WIP 即時概況、設備即時概況、設備歷史績效、QC-GATE 即時狀態
-- **查詢類**:設備維修查詢、批次追蹤工具、TMTT 不良分析
+- **查詢類**:設備維修查詢、批次追蹤工具、TMTT 不良分析、中段製程不良追溯
- **開發工具**(admin only):數據表查詢、Excel 批次查詢、頁面管理、效能監控
- 抽屜/頁面配置可由管理員動態管理(新增、重排、刪除)
@@ -523,6 +525,21 @@ A: 請確認瀏覽器允許下載檔案,並檢查查詢結果是否有資料
- 站點排序依 DW_MES_SPEC_WORKCENTER_V 製程順序
- **技術架構**:第一個純 Vue 3 + Vite 頁面,完全脫離 Jinja2
+### 中段製程不良追溯分析
+
+- TMTT 測試站不良回溯至上游機台 / 站點 / 製程的反向追蹤分析
+- 三段式資料管線:
+ 1. TMTT 偵測(LOTWIPHISTORY + LOTREJECTHISTORY)
+ 2. LOT 族譜解析(CONTAINER.SPLITFROMID BFS 分批鏈 + PJ_COMBINEDASSYLOTS 合批展開)
+ 3. 上游製程歷史(LOTWIPHISTORY by ancestor CIDs)
+- 6 張 KPI 卡片 + 6 張 Pareto 圖表(依站點/不良原因/上游機台/TMTT 機台/製程/封裝歸因)
+- 日趨勢折線圖 + LOT 明細分頁表(200 筆/頁,伺服器端 DEFECT_RATE 降序排序)
+- 不良原因多選篩選(205 種,24h Redis 快取)
+- 分析結果 5 分鐘 Redis 快取;summary API (~16 KB) 與 detail API (~110 KB/page) 分離
+- CSV 串流匯出(UTF-8 BOM,完整明細)
+- 5 分鐘自動刷新 + visibilitychange 即時刷新
+- **技術架構**:Vue 3 + Vite,Pareto/趨勢圖使用 vue-echarts,複用 `wip-shared/` 的 Pagination/useAutoRefresh
+
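The six Pareto charts above cap each dimension at a Top-N and fold the remainder into an「其他」bucket. A minimal sketch of that grouping (a hypothetical helper for illustration; the service's actual aggregation may differ):

```python
def top_n_with_other(counts, n=10, other_label='其他'):
    """Keep the n largest entries; fold the rest into a single bucket."""
    items = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    head, tail = items[:n], items[n:]
    if tail:
        head.append((other_label, sum(v for _, v in tail)))
    return head

# 5 machines, keep top 3 → 3 entries plus 其他
data = {'M1': 50, 'M2': 30, 'M3': 20, 'M4': 5, 'M5': 2}
print(top_n_with_other(data, n=3))
```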
### 數據表查詢工具
- 顯示所有 DWH 表格的分類卡片目錄(即時數據表/現況快照表/歷史累積表/輔助表)
@@ -588,8 +605,8 @@ CIRCUIT_BREAKER_RECOVERY_TIMEOUT=30
| 技術 | 用途 |
|------|------|
| Jinja2 | 模板引擎(既有頁面) |
-| Vue 3 | UI 框架(QC-GATE、Tables、WIP 三頁、設備雙頁已遷移,漸進式擴展中) |
-| vue-echarts | ECharts Vue 封裝(QC-GATE、WIP Overview Pareto、Resource History 4 圖表) |
+| Vue 3 | UI 框架(QC-GATE、Tables、WIP 三頁、設備雙頁、中段不良追溯已遷移,漸進式擴展中) |
+| vue-echarts | ECharts Vue 封裝(QC-GATE、WIP Overview Pareto、Resource History 4 圖表、Mid-Section Defect 7 圖表) |
| Vite 6 | 前端多頁模組打包(含 Vue SFC + HTML entry) |
| ECharts | 圖表庫(npm tree-shaking + 舊版靜態檔案並存) |
| Vanilla JS Modules | 互動功能與頁面邏輯(既有頁面) |
@@ -640,7 +657,8 @@ DashBoard_vite/
│ │ ├── dashboard/ # 儀表板查詢
│ │ ├── resource/ # 設備查詢
│ │ ├── wip/ # WIP 查詢
-│ │ └── resource_history/ # 設備歷史查詢
+│ │ ├── resource_history/ # 設備歷史查詢
+│ │ └── mid_section_defect/ # 中段不良追溯查詢
│ └── templates/ # HTML 模板
├── frontend/ # Vite 前端專案
│ ├── src/core/ # 共用 API/欄位契約/計算 helper
@@ -655,7 +673,8 @@ DashBoard_vite/
│ ├── src/wip-shared/ # WIP 三頁共用 CSS/常數/元件
│ ├── src/wip-overview/ # WIP 即時概況 (Vue 3 SFC)
│ ├── src/wip-detail/ # WIP 明細查詢 (Vue 3 SFC)
-│ └── src/hold-detail/ # Hold 狀態分析 (Vue 3 SFC)
+│ ├── src/hold-detail/ # Hold 狀態分析 (Vue 3 SFC)
+│ └── src/mid-section-defect/ # 中段不良追溯分析 (Vue 3 SFC)
├── shared/
│ └── field_contracts.json # 前後端共用欄位契約
├── scripts/ # 腳本
@@ -739,6 +758,22 @@ conda run -n mes-dashboard python scripts/run_cache_benchmarks.py --enforce
## 變更日誌
+### 2026-02-10
+
+- 新增中段製程不良追溯分析頁面(`/mid-section-defect`):
+ - TMTT 測試站偵測到的不良,反向追蹤至上游機台/站點/製程的歸因分析
+ - 三段式資料管線:TMTT 偵測 → LOT 族譜解析(SPLITFROMID BFS 分批鏈 + COMBINEDASSYLOTS 合批展開)→ 上游製程歷史
+ - 6 張 KPI 卡片(投入/LOT數/不良數/不良率/首要原因/影響機台數)
+ - 6 張 Pareto 圖表(站點/不良原因/上游機台/TMTT 機台/製程/封裝歸因)+ 日趨勢折線
+ - 不良原因多選篩選(205 種,全站 24h Redis 快取,`/api/mid-section-defect/loss-reasons`)
+ - Detail API 分頁分離(`/api/mid-section-defect/analysis` summary ~16 KB + `/api/mid-section-defect/analysis/detail` ~110 KB/page),原 72 MB 單次回應
+ - 伺服器端 DEFECT_RATE 降序排序 + 前端頁內欄位排序
+ - CSV 串流匯出(UTF-8 BOM,完整明細)
+ - 進入頁面不自動查詢,點擊「查詢」後才執行;首次查詢後啟動 5 分鐘自動刷新
+ - Summary + Detail page 1 平行載入(`Promise.all`)
+ - NaN 安全防護(Oracle NULL → pandas NaN,`isinstance(val, str)` 過濾)
+ - 技術架構:Vue 3 + Vite,vue-echarts 7 圖表,複用 `wip-shared/` Pagination/useAutoRefresh
+
### 2026-02-09
- 完成設備雙頁 Vue 3 遷移(`/resource`、`/resource-history`):
@@ -879,5 +914,5 @@ conda run -n mes-dashboard python scripts/run_cache_benchmarks.py --enforce
---
-**文檔版本**: 5.3
-**最後更新**: 2026-02-09
+**文檔版本**: 5.4
+**最後更新**: 2026-02-10
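The SPLITFROMID BFS mentioned throughout the changelog walks split links upward from each TMTT lot until no parent remains. A minimal sketch under stated assumptions — `fetch_parents` is a hypothetical stand-in for the batched `CONTAINER.SPLITFROMID` lookup, not the service's real query code:

```python
def resolve_ancestors(tmtt_cids, fetch_parents):
    """BFS upward through split links; returns TMTT CID -> set of ancestor CIDs."""
    ancestors = {}
    for cid in tmtt_cids:
        seen, frontier = set(), [cid]
        while frontier:
            # Expand one BFS level; skip CIDs already collected
            parents = [p for c in frontier for p in fetch_parents(c)
                       if p and p not in seen]
            seen.update(parents)
            frontier = parents
        ancestors[cid] = seen
    return ancestors

# Toy split chain: LOT-C was split from LOT-B, which was split from LOT-A
split_from = {'LOT-C': ['LOT-B'], 'LOT-B': ['LOT-A']}
result = resolve_ancestors(['LOT-C'], lambda c: split_from.get(c, []))
```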
diff --git a/data/page_status.json b/data/page_status.json
index 646a03b..3cdbcd5 100644
--- a/data/page_status.json
+++ b/data/page_status.json
@@ -78,6 +78,13 @@
"drawer_id": "queries",
"order": 5
},
+ {
+ "route": "/mid-section-defect",
+ "name": "中段製程不良追溯",
+ "status": "dev",
+ "drawer_id": "queries",
+ "order": 6
+ },
{
"route": "/admin/pages",
"name": "頁面管理",
diff --git a/frontend/package.json b/frontend/package.json
index 6484262..41d2356 100644
--- a/frontend/package.json
+++ b/frontend/package.json
@@ -5,7 +5,7 @@
"type": "module",
"scripts": {
"dev": "vite --host",
- "build": "vite build && cp ../src/mes_dashboard/static/dist/src/tables/index.html ../src/mes_dashboard/static/dist/tables.html && cp ../src/mes_dashboard/static/dist/src/qc-gate/index.html ../src/mes_dashboard/static/dist/qc-gate.html && cp ../src/mes_dashboard/static/dist/src/wip-overview/index.html ../src/mes_dashboard/static/dist/wip-overview.html && cp ../src/mes_dashboard/static/dist/src/wip-detail/index.html ../src/mes_dashboard/static/dist/wip-detail.html && cp ../src/mes_dashboard/static/dist/src/hold-detail/index.html ../src/mes_dashboard/static/dist/hold-detail.html && cp ../src/mes_dashboard/static/dist/src/resource-status/index.html ../src/mes_dashboard/static/dist/resource-status.html && cp ../src/mes_dashboard/static/dist/src/resource-history/index.html ../src/mes_dashboard/static/dist/resource-history.html",
+ "build": "vite build && cp ../src/mes_dashboard/static/dist/src/tables/index.html ../src/mes_dashboard/static/dist/tables.html && cp ../src/mes_dashboard/static/dist/src/qc-gate/index.html ../src/mes_dashboard/static/dist/qc-gate.html && cp ../src/mes_dashboard/static/dist/src/wip-overview/index.html ../src/mes_dashboard/static/dist/wip-overview.html && cp ../src/mes_dashboard/static/dist/src/wip-detail/index.html ../src/mes_dashboard/static/dist/wip-detail.html && cp ../src/mes_dashboard/static/dist/src/hold-detail/index.html ../src/mes_dashboard/static/dist/hold-detail.html && cp ../src/mes_dashboard/static/dist/src/resource-status/index.html ../src/mes_dashboard/static/dist/resource-status.html && cp ../src/mes_dashboard/static/dist/src/resource-history/index.html ../src/mes_dashboard/static/dist/resource-history.html && cp ../src/mes_dashboard/static/dist/src/mid-section-defect/index.html ../src/mes_dashboard/static/dist/mid-section-defect.html",
"test": "node --test tests/*.test.js"
},
"devDependencies": {
diff --git a/frontend/src/mid-section-defect/App.vue b/frontend/src/mid-section-defect/App.vue
new file mode 100644
index 0000000..becd8f9
--- /dev/null
+++ b/frontend/src/mid-section-defect/App.vue
@@ -0,0 +1,263 @@
+
+
+
+
+
+
+
+
+
{{ queryError }}
+
+
+
+ 追溯分析未完成(genealogy 查詢失敗),圖表僅顯示 TMTT 站點數據。
+
+
+
+
+
+
+
+
+
+
+
請選擇日期範圍與不良原因,點擊「查詢」開始分析。
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/components/DetailTable.vue b/frontend/src/mid-section-defect/components/DetailTable.vue
new file mode 100644
index 0000000..268a266
--- /dev/null
+++ b/frontend/src/mid-section-defect/components/DetailTable.vue
@@ -0,0 +1,139 @@
+
+
+
+
+
+
+
+
+
+
+
+ |
+ {{ col.label }}{{ sortIcon(col.key) }}
+ |
+
+
+
+
+ |
+ {{ formatCell(row[col.key], col) }}
+ |
+
+
+ | 暫無資料 |
+
+
+
+
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/components/FilterBar.vue b/frontend/src/mid-section-defect/components/FilterBar.vue
new file mode 100644
index 0000000..719bf28
--- /dev/null
+++ b/frontend/src/mid-section-defect/components/FilterBar.vue
@@ -0,0 +1,77 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/components/KpiCards.vue b/frontend/src/mid-section-defect/components/KpiCards.vue
new file mode 100644
index 0000000..a813a49
--- /dev/null
+++ b/frontend/src/mid-section-defect/components/KpiCards.vue
@@ -0,0 +1,83 @@
+
+
+
+
+
+
+
{{ card.label }}
+
+ {{ card.value }}
+ {{ card.unit }}
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/components/MultiSelect.vue b/frontend/src/mid-section-defect/components/MultiSelect.vue
new file mode 100644
index 0000000..e89b23f
--- /dev/null
+++ b/frontend/src/mid-section-defect/components/MultiSelect.vue
@@ -0,0 +1,154 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/components/ParetoChart.vue b/frontend/src/mid-section-defect/components/ParetoChart.vue
new file mode 100644
index 0000000..ec823ad
--- /dev/null
+++ b/frontend/src/mid-section-defect/components/ParetoChart.vue
@@ -0,0 +1,132 @@
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/components/TrendChart.vue b/frontend/src/mid-section-defect/components/TrendChart.vue
new file mode 100644
index 0000000..fca31f9
--- /dev/null
+++ b/frontend/src/mid-section-defect/components/TrendChart.vue
@@ -0,0 +1,111 @@
+
+
+
+
+
diff --git a/frontend/src/mid-section-defect/index.html b/frontend/src/mid-section-defect/index.html
new file mode 100644
index 0000000..b69f4c1
--- /dev/null
+++ b/frontend/src/mid-section-defect/index.html
@@ -0,0 +1,12 @@
+<!DOCTYPE html>
+<html lang="zh-Hant">
+<head>
+  <meta charset="UTF-8" />
+  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
+  <title>中段製程不良追溯分析</title>
+</head>
+<body>
+  <div id="app"></div>
+  <script type="module" src="./main.js"></script>
+</body>
+</html>
diff --git a/frontend/src/mid-section-defect/main.js b/frontend/src/mid-section-defect/main.js
new file mode 100644
index 0000000..c56440b
--- /dev/null
+++ b/frontend/src/mid-section-defect/main.js
@@ -0,0 +1,6 @@
+import { createApp } from 'vue';
+
+import App from './App.vue';
+import './style.css';
+
+createApp(App).mount('#app');
diff --git a/frontend/src/mid-section-defect/style.css b/frontend/src/mid-section-defect/style.css
new file mode 100644
index 0000000..b1bbff0
--- /dev/null
+++ b/frontend/src/mid-section-defect/style.css
@@ -0,0 +1,498 @@
+:root {
+ --msd-bg: #f5f7fb;
+ --msd-card-bg: #ffffff;
+ --msd-text: #1f2937;
+ --msd-muted: #64748b;
+ --msd-border: #dbe4ef;
+ --msd-shadow: 0 1px 3px rgba(15, 23, 42, 0.08);
+ --msd-shadow-md: 0 8px 22px rgba(15, 23, 42, 0.1);
+ --msd-primary: #6366f1;
+ --msd-primary-dark: #4f46e5;
+}
+
+* {
+ box-sizing: border-box;
+}
+
+body {
+ margin: 0;
+ font-family: 'Microsoft JhengHei', 'PingFang TC', 'Noto Sans TC', sans-serif;
+ background: var(--msd-bg);
+ color: var(--msd-text);
+}
+
+.page-container {
+ min-height: 100vh;
+ padding: 16px;
+ max-width: 1800px;
+ margin: 0 auto;
+}
+
+/* ====== Header ====== */
+.page-header {
+ display: flex;
+ flex-direction: column;
+ gap: 4px;
+ padding: 16px 20px;
+ border-radius: 12px;
+ margin-bottom: 16px;
+ background: linear-gradient(135deg, #6366f1 0%, #8b5cf6 50%, #a855f7 100%);
+ box-shadow: var(--msd-shadow-md);
+}
+
+.page-header h1 {
+ margin: 0;
+ color: #ffffff;
+ font-size: 24px;
+ letter-spacing: 0.2px;
+}
+
+.header-desc {
+ margin: 0;
+ color: rgba(255, 255, 255, 0.8);
+ font-size: 13px;
+}
+
+/* ====== Section Card ====== */
+.section-card {
+ background: var(--msd-card-bg);
+ border-radius: 12px;
+ box-shadow: var(--msd-shadow);
+ margin-bottom: 16px;
+}
+
+.section-inner {
+ padding: 16px 20px;
+}
+
+.section-title {
+ margin: 0 0 12px;
+ font-size: 16px;
+ font-weight: 700;
+}
+
+/* ====== Filter Bar ====== */
+.filter-row {
+ display: flex;
+ flex-wrap: wrap;
+ gap: 12px;
+ align-items: center;
+}
+
+.filter-field {
+ display: flex;
+ align-items: center;
+ gap: 8px;
+}
+
+.filter-field label {
+ font-size: 13px;
+ color: #475569;
+ white-space: nowrap;
+}
+
+.filter-field input[type='date'] {
+ border: 1px solid var(--msd-border);
+ border-radius: 6px;
+ padding: 8px 10px;
+ font-size: 13px;
+ background: #ffffff;
+ color: #1f2937;
+}
+
+/* ====== Buttons ====== */
+.btn {
+ border: none;
+ border-radius: 8px;
+ padding: 8px 20px;
+ font-size: 14px;
+ font-weight: 600;
+ cursor: pointer;
+ transition: background 0.2s;
+}
+
+.btn-primary {
+ background: var(--msd-primary);
+ color: #ffffff;
+}
+
+.btn-primary:hover:not(:disabled) {
+ background: var(--msd-primary-dark);
+}
+
+.btn:disabled {
+ cursor: not-allowed;
+ opacity: 0.6;
+}
+
+.btn-sm {
+ border: 1px solid var(--msd-border);
+ border-radius: 6px;
+ background: #ffffff;
+ color: #475569;
+ font-size: 12px;
+ padding: 4px 10px;
+ cursor: pointer;
+}
+
+.btn-sm:hover:not(:disabled) {
+ background: #f1f5f9;
+}
+
+/* ====== MultiSelect ====== */
+.multi-select {
+ position: relative;
+ min-width: 200px;
+}
+
+.multi-select-trigger {
+ width: 100%;
+ display: flex;
+ align-items: center;
+ justify-content: space-between;
+ gap: 8px;
+ border: 1px solid var(--msd-border);
+ border-radius: 6px;
+ padding: 8px 10px;
+ font-size: 13px;
+ color: #1f2937;
+ background: #ffffff;
+ cursor: pointer;
+}
+
+.multi-select-trigger:disabled {
+ cursor: not-allowed;
+ opacity: 0.7;
+}
+
+.multi-select-text {
+ overflow: hidden;
+ text-overflow: ellipsis;
+ white-space: nowrap;
+}
+
+.multi-select-arrow {
+ flex-shrink: 0;
+ font-size: 10px;
+ color: #94a3b8;
+}
+
+.multi-select-dropdown {
+ position: absolute;
+ top: 100%;
+ left: 0;
+ right: 0;
+ margin-top: 4px;
+ background: #ffffff;
+ border: 1px solid var(--msd-border);
+ border-radius: 8px;
+ box-shadow: var(--msd-shadow-md);
+ z-index: 100;
+ max-height: 300px;
+ display: flex;
+ flex-direction: column;
+}
+
+.multi-select-options {
+ overflow-y: auto;
+ flex: 1;
+ padding: 4px 0;
+}
+
+.multi-select-option {
+ display: flex;
+ align-items: center;
+ gap: 8px;
+ width: 100%;
+ padding: 6px 12px;
+ border: none;
+ background: transparent;
+ cursor: pointer;
+ font-size: 13px;
+ text-align: left;
+}
+
+.multi-select-option:hover {
+ background: #f1f5f9;
+}
+
+.multi-select-option input[type='checkbox'] {
+ pointer-events: none;
+}
+
+.multi-select-actions {
+ display: flex;
+ gap: 6px;
+ padding: 8px 12px;
+ border-top: 1px solid var(--msd-border);
+ background: #f8fafc;
+ border-radius: 0 0 8px 8px;
+}
+
+/* ====== Error / Warning Banners ====== */
+.error-banner {
+ padding: 12px 16px;
+ background: #fef2f2;
+ border: 1px solid #fecaca;
+ border-radius: 8px;
+ color: #dc2626;
+ font-size: 14px;
+ margin-bottom: 16px;
+}
+
+.warning-banner {
+ padding: 12px 16px;
+ background: #fffbeb;
+ border: 1px solid #fde68a;
+ border-radius: 8px;
+ color: #b45309;
+ font-size: 14px;
+ margin-bottom: 16px;
+}
+
+/* ====== KPI Cards ====== */
+.kpi-section {
+ margin-bottom: 16px;
+}
+
+.kpi-grid {
+ display: grid;
+ grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
+ gap: 12px;
+}
+
+.kpi-card {
+ background: var(--msd-card-bg);
+ border-radius: 10px;
+ box-shadow: var(--msd-shadow);
+ padding: 16px;
+ border-top: 3px solid #6366f1;
+}
+
+.kpi-label {
+ font-size: 12px;
+ color: var(--msd-muted);
+ margin-bottom: 6px;
+}
+
+.kpi-value {
+ font-size: 24px;
+ font-weight: 700;
+ display: flex;
+ align-items: baseline;
+ gap: 4px;
+}
+
+.kpi-value.kpi-text {
+ font-size: 14px;
+ font-weight: 600;
+ word-break: break-all;
+}
+
+.kpi-unit {
+ font-size: 12px;
+ font-weight: 400;
+ color: var(--msd-muted);
+}
+
+/* ====== Charts ====== */
+.charts-section {
+ margin-bottom: 16px;
+}
+
+.charts-row {
+ display: grid;
+ grid-template-columns: 1fr 1fr;
+ gap: 12px;
+ margin-bottom: 12px;
+}
+
+.charts-row-full {
+ grid-template-columns: 1fr;
+}
+
+.chart-card {
+ background: var(--msd-card-bg);
+ border-radius: 10px;
+ box-shadow: var(--msd-shadow);
+ padding: 16px;
+}
+
+.chart-card-full {
+ grid-column: 1 / -1;
+}
+
+.chart-title {
+ margin: 0 0 8px;
+ font-size: 14px;
+ font-weight: 600;
+ color: var(--msd-text);
+}
+
+.chart-canvas {
+ width: 100%;
+ height: 320px;
+}
+
+.chart-canvas-wide {
+ height: 280px;
+}
+
+.chart-empty {
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ height: 200px;
+ color: var(--msd-muted);
+ font-size: 14px;
+}
+
+/* ====== Detail Table ====== */
+.detail-header {
+ display: flex;
+ justify-content: space-between;
+ align-items: center;
+ margin-bottom: 12px;
+}
+
+.detail-actions {
+ display: flex;
+ align-items: center;
+ gap: 12px;
+}
+
+.detail-count {
+ font-size: 13px;
+ color: var(--msd-muted);
+}
+
+.table-wrapper {
+ overflow-x: auto;
+ border: 1px solid var(--msd-border);
+ border-radius: 8px;
+}
+
+.detail-table {
+ width: 100%;
+ border-collapse: collapse;
+ font-size: 13px;
+}
+
+.detail-table th {
+ background: #f1f5f9;
+ padding: 10px 12px;
+ text-align: left;
+ font-weight: 600;
+ font-size: 12px;
+ color: #475569;
+ white-space: nowrap;
+ border-bottom: 2px solid var(--msd-border);
+ position: sticky;
+ top: 0;
+}
+
+.detail-table th.sortable {
+ cursor: pointer;
+ user-select: none;
+}
+
+.detail-table th.sortable:hover {
+ background: #e2e8f0;
+}
+
+.detail-table td {
+ padding: 8px 12px;
+ border-bottom: 1px solid #f1f5f9;
+ white-space: nowrap;
+}
+
+.detail-table td.numeric {
+ text-align: right;
+ font-variant-numeric: tabular-nums;
+}
+
+.detail-table tbody tr:hover {
+ background: #f8fafc;
+}
+
+.empty-row {
+ text-align: center;
+ padding: 32px;
+ color: var(--msd-muted);
+}
+
+/* ====== Pagination ====== */
+.pagination {
+ display: flex;
+ justify-content: center;
+ align-items: center;
+ gap: 12px;
+ padding: 16px;
+ border-top: 1px solid var(--msd-border);
+}
+
+.pagination button {
+ padding: 8px 16px;
+ border: 1px solid var(--msd-border);
+ background: #fff;
+ border-radius: 6px;
+ cursor: pointer;
+ font-size: 13px;
+}
+
+.pagination button:hover:not(:disabled) {
+ border-color: var(--msd-primary);
+ color: var(--msd-primary);
+}
+
+.pagination button:disabled {
+ opacity: 0.5;
+ cursor: not-allowed;
+}
+
+.pagination .page-info {
+ font-size: 13px;
+ color: var(--msd-muted);
+}
+
+/* ====== Empty State ====== */
+.empty-state {
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ padding: 80px 20px;
+ color: var(--msd-muted);
+ font-size: 15px;
+}
+
+/* ====== Loading Overlay ====== */
+.loading-overlay {
+ position: fixed;
+ inset: 0;
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ background: rgba(255, 255, 255, 0.7);
+ z-index: 999;
+ transition: opacity 0.2s;
+}
+
+.loading-overlay.hidden {
+ opacity: 0;
+ pointer-events: none;
+}
+
+.loading-spinner {
+ width: 40px;
+ height: 40px;
+ border: 3px solid #e2e8f0;
+ border-top-color: var(--msd-primary);
+ border-radius: 50%;
+ animation: spin 0.8s linear infinite;
+}
+
+@keyframes spin {
+ to {
+ transform: rotate(360deg);
+ }
+}
diff --git a/frontend/vite.config.js b/frontend/vite.config.js
index cd82046..e97f7e1 100644
--- a/frontend/vite.config.js
+++ b/frontend/vite.config.js
@@ -23,7 +23,8 @@ export default defineConfig(({ mode }) => ({
tables: resolve(__dirname, 'src/tables/index.html'),
'query-tool': resolve(__dirname, 'src/query-tool/main.js'),
'tmtt-defect': resolve(__dirname, 'src/tmtt-defect/main.js'),
- 'qc-gate': resolve(__dirname, 'src/qc-gate/index.html')
+ 'qc-gate': resolve(__dirname, 'src/qc-gate/index.html'),
+ 'mid-section-defect': resolve(__dirname, 'src/mid-section-defect/index.html')
},
output: {
entryFileNames: '[name].js',
diff --git a/src/mes_dashboard/app.py b/src/mes_dashboard/app.py
index 1b2a5f4..146b164 100644
--- a/src/mes_dashboard/app.py
+++ b/src/mes_dashboard/app.py
@@ -488,6 +488,12 @@ def create_app(config_name: str | None = None) -> Flask:
dist_dir = os.path.join(app.static_folder or "", "dist")
return send_from_directory(dist_dir, 'qc-gate.html')
+ @app.route('/mid-section-defect')
+ def mid_section_defect_page():
+ """Mid-section defect traceability analysis page (pure Vite)."""
+ dist_dir = os.path.join(app.static_folder or "", "dist")
+ return send_from_directory(dist_dir, 'mid-section-defect.html')
+
# ========================================================
# Table Query APIs (for table_data_viewer)
# ========================================================
diff --git a/src/mes_dashboard/core/database.py b/src/mes_dashboard/core/database.py
index b119c02..bdbf279 100644
--- a/src/mes_dashboard/core/database.py
+++ b/src/mes_dashboard/core/database.py
@@ -559,6 +559,75 @@ def read_sql_df(sql: str, params: Optional[Dict[str, Any]] = None) -> pd.DataFra
raise
+def read_sql_df_slow(
+ sql: str,
+ params: Optional[Dict[str, Any]] = None,
+ timeout_seconds: int = 120,
+) -> pd.DataFrame:
+ """Execute a slow SQL query with a custom timeout via a direct oracledb connection.
+
+ Unlike read_sql_df, which uses the pooled engine (55s timeout), this
+ function creates a dedicated connection with a longer call_timeout for
+ known-slow queries (e.g. full table scans on large tables).
+
+ Args:
+ sql: SQL query string with Oracle bind variables.
+ params: Optional dict of parameter values.
+ timeout_seconds: Call timeout in seconds (default: 120).
+
+ Returns:
+ DataFrame with query results.
+
+ Raises:
+ Exception: Re-raised after logging when the connection or query fails.
+ """
+ start_time = time.time()
+ timeout_ms = timeout_seconds * 1000
+
+ conn = None
+ try:
+ runtime = get_db_runtime_config()
+ conn = oracledb.connect(
+ **DB_CONFIG,
+ tcp_connect_timeout=runtime["tcp_connect_timeout"],
+ retry_count=runtime["retry_count"],
+ retry_delay=runtime["retry_delay"],
+ )
+ conn.call_timeout = timeout_ms
+ logger.debug(
+ "Slow-query connection established (call_timeout_ms=%s)", timeout_ms
+ )
+
+ cursor = conn.cursor()
+ cursor.execute(sql, params or {})
+ columns = [desc[0].upper() for desc in cursor.description]
+ rows = cursor.fetchall()
+ cursor.close()
+
+ df = pd.DataFrame(rows, columns=columns)
+
+ elapsed = time.time() - start_time
+ if elapsed > 1.0:
+ sql_preview = sql.strip().replace('\n', ' ')[:100]
+ logger.warning(f"Slow query ({elapsed:.2f}s): {sql_preview}...")
+ else:
+ logger.debug(f"Query completed in {elapsed:.3f}s, rows={len(df)}")
+
+ return df
+
+ except Exception as exc:
+ elapsed = time.time() - start_time
+ ora_code = _extract_ora_code(exc)
+ sql_preview = sql.strip().replace('\n', ' ')[:100]
+ logger.error(
+ f"Query failed after {elapsed:.2f}s - ORA-{ora_code}: {exc} | SQL: {sql_preview}..."
+ )
+ raise
+ finally:
+ if conn:
+ try:
+ conn.close()
+ except Exception:
+ pass
+
+
# ============================================================
# Table Utilities
# ============================================================
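Oracle rejects literal IN lists longer than 1,000 entries (ORA-01795), which is why the service defines `ORACLE_IN_BATCH_SIZE = 1000`. A batching helper along these lines (illustrative only; the real query code may differ) splits ancestor CID lists before binding:

```python
def chunked(values, size=1000):
    """Yield successive slices no longer than `size` for IN-clause batching."""
    for i in range(0, len(values), size):
        yield values[i:i + size]

cids = [f'CID{i}' for i in range(2500)]
batches = list(chunked(cids))
# 3 batches of sizes 1000, 1000, 500
```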
diff --git a/src/mes_dashboard/routes/__init__.py b/src/mes_dashboard/routes/__init__.py
index 0ce5a98..fad3ec1 100644
--- a/src/mes_dashboard/routes/__init__.py
+++ b/src/mes_dashboard/routes/__init__.py
@@ -16,6 +16,7 @@ from .job_query_routes import job_query_bp
from .query_tool_routes import query_tool_bp
from .tmtt_defect_routes import tmtt_defect_bp
from .qc_gate_routes import qc_gate_bp
+from .mid_section_defect_routes import mid_section_defect_bp
def register_routes(app) -> None:
@@ -30,6 +31,7 @@ def register_routes(app) -> None:
app.register_blueprint(query_tool_bp)
app.register_blueprint(tmtt_defect_bp)
app.register_blueprint(qc_gate_bp)
+ app.register_blueprint(mid_section_defect_bp)
__all__ = [
'wip_bp',
@@ -44,5 +46,6 @@ __all__ = [
'query_tool_bp',
'tmtt_defect_bp',
'qc_gate_bp',
+ 'mid_section_defect_bp',
'register_routes',
]
diff --git a/src/mes_dashboard/routes/mid_section_defect_routes.py b/src/mes_dashboard/routes/mid_section_defect_routes.py
new file mode 100644
index 0000000..88e1366
--- /dev/null
+++ b/src/mes_dashboard/routes/mid_section_defect_routes.py
@@ -0,0 +1,160 @@
+# -*- coding: utf-8 -*-
+"""Mid-Section Defect Traceability Analysis API routes.
+
+Reverse traceability from TMTT (test) station back to upstream production stations.
+"""
+
+from flask import Blueprint, jsonify, request, Response
+
+from mes_dashboard.services.mid_section_defect_service import (
+ query_analysis,
+ query_analysis_detail,
+ query_all_loss_reasons,
+ export_csv,
+)
+
+mid_section_defect_bp = Blueprint(
+ 'mid_section_defect',
+ __name__,
+ url_prefix='/api/mid-section-defect'
+)
+
+
+@mid_section_defect_bp.route('/analysis', methods=['GET'])
+def api_analysis():
+ """API: Get mid-section defect traceability analysis (summary).
+
+ Returns kpi, charts, daily_trend, available_loss_reasons, genealogy_status,
+ and detail_total_count. Does NOT include the detail array — use
+ /analysis/detail for paginated detail data.
+
+ Query Parameters:
+ start_date: Start date (YYYY-MM-DD), required
+ end_date: End date (YYYY-MM-DD), required
+ loss_reasons: Comma-separated loss reason names, optional
+ """
+ start_date = request.args.get('start_date')
+ end_date = request.args.get('end_date')
+
+ if not start_date or not end_date:
+ return jsonify({
+ 'success': False,
+ 'error': '必須提供 start_date 和 end_date 參數'
+ }), 400
+
+ loss_reasons_str = request.args.get('loss_reasons', '')
+ loss_reasons = [r.strip() for r in loss_reasons_str.split(',') if r.strip()] or None
+
+ result = query_analysis(start_date, end_date, loss_reasons)
+
+ if result is None:
+ return jsonify({'success': False, 'error': '查詢失敗,請稍後再試'}), 500
+
+ if 'error' in result:
+ return jsonify({'success': False, 'error': result['error']}), 400
+
+ # Return summary only (no detail array) to keep response lightweight
+ summary = {
+ 'kpi': result.get('kpi'),
+ 'charts': result.get('charts'),
+ 'daily_trend': result.get('daily_trend'),
+ 'available_loss_reasons': result.get('available_loss_reasons'),
+ 'genealogy_status': result.get('genealogy_status'),
+ 'detail_total_count': len(result.get('detail', [])),
+ }
+
+ return jsonify({'success': True, 'data': summary})
+
+
+@mid_section_defect_bp.route('/analysis/detail', methods=['GET'])
+def api_analysis_detail():
+ """API: Get paginated detail table for mid-section defect analysis.
+
+ Query Parameters:
+ start_date: Start date (YYYY-MM-DD), required
+ end_date: End date (YYYY-MM-DD), required
+ loss_reasons: Comma-separated loss reason names, optional
+ page: Page number (default 1)
+ page_size: Records per page (default 200, max 500)
+ """
+ start_date = request.args.get('start_date')
+ end_date = request.args.get('end_date')
+
+ if not start_date or not end_date:
+ return jsonify({
+ 'success': False,
+ 'error': '必須提供 start_date 和 end_date 參數'
+ }), 400
+
+ loss_reasons_str = request.args.get('loss_reasons', '')
+ loss_reasons = [r.strip() for r in loss_reasons_str.split(',') if r.strip()] or None
+
+ page = max(request.args.get('page', 1, type=int), 1)
+ page_size = max(1, min(request.args.get('page_size', 200, type=int), 500))
+
+ result = query_analysis_detail(
+ start_date, end_date, loss_reasons,
+ page=page, page_size=page_size,
+ )
+
+ if result is None:
+ return jsonify({'success': False, 'error': '查詢失敗,請稍後再試'}), 500
+
+ if 'error' in result:
+ return jsonify({'success': False, 'error': result['error']}), 400
+
+ return jsonify({'success': True, 'data': result})
+
+
+@mid_section_defect_bp.route('/loss-reasons', methods=['GET'])
+def api_loss_reasons():
+ """API: Get all TMTT loss reasons (cached daily).
+
+ No parameters required — returns all loss reasons from last 180 days,
+ cached in Redis with 24h TTL for instant dropdown population.
+
+ Returns:
+ JSON with loss_reasons list.
+ """
+ result = query_all_loss_reasons()
+
+ if result is None:
+ return jsonify({'success': False, 'error': '查詢失敗,請稍後再試'}), 500
+
+ return jsonify({'success': True, 'data': result})
+
+
+@mid_section_defect_bp.route('/export', methods=['GET'])
+def api_export():
+ """API: Export mid-section defect detail data as CSV.
+
+ Query Parameters:
+ start_date: Start date (YYYY-MM-DD), required
+ end_date: End date (YYYY-MM-DD), required
+ loss_reasons: Comma-separated loss reason names, optional
+
+ Returns:
+ CSV file download.
+ """
+ start_date = request.args.get('start_date')
+ end_date = request.args.get('end_date')
+
+ if not start_date or not end_date:
+ return jsonify({
+ 'success': False,
+ 'error': '必須提供 start_date 和 end_date 參數'
+ }), 400
+
+ loss_reasons_str = request.args.get('loss_reasons', '')
+ loss_reasons = [r.strip() for r in loss_reasons_str.split(',') if r.strip()] or None
+
+ filename = f"mid_section_defect_{start_date}_to_{end_date}.csv"
+
+ return Response(
+ export_csv(start_date, end_date, loss_reasons),
+ mimetype='text/csv',
+ headers={
+ 'Content-Disposition': f'attachment; filename={filename}',
+ 'Content-Type': 'text/csv; charset=utf-8'
+ }
+ )
diff --git a/src/mes_dashboard/services/mid_section_defect_service.py b/src/mes_dashboard/services/mid_section_defect_service.py
new file mode 100644
index 0000000..2843105
--- /dev/null
+++ b/src/mes_dashboard/services/mid_section_defect_service.py
@@ -0,0 +1,1163 @@
+# -*- coding: utf-8 -*-
+"""Mid-Section Defect Traceability Analysis Service.
+
+Reverse traceability from TMTT (test) station back to upstream production stations.
+Traces LOT genealogy (splits + merges) to attribute TMTT defects to upstream machines.
+
+Data Pipeline:
+ Query 1: TMTT Detection → TMTT lots with ALL loss reasons
+ Query 2a: Split Chain BFS → CONTAINER.SPLITFROMID upward traversal
+ Query 2b: Merge Expansion → COMBINEDASSYLOTS reverse merge lookup
+ Query 3: Upstream History → LOTWIPHISTORY for ALL ancestor CONTAINERIDs
+ Python: Resolve ancestors → Attribute defects → Aggregate
+
+Attribution Method (Sum):
+ For upstream machine M at station S:
+ attributed_rejectqty = SUM(TMTT REJECTQTY for all linked TMTT lots)
+ attributed_trackinqty = SUM(TMTT TRACKINQTY for all linked TMTT lots)
+ rate = attributed_rejectqty / attributed_trackinqty × 100
+"""
+
+import csv
+import io
+import logging
+import math
+from collections import defaultdict
+from datetime import datetime
+from typing import Optional, Dict, List, Any, Set, Tuple, Generator
+
+import pandas as pd
+
+from mes_dashboard.core.database import read_sql_df
+from mes_dashboard.core.cache import cache_get, cache_set, make_cache_key
+from mes_dashboard.sql import SQLLoader, QueryBuilder
+from mes_dashboard.config.workcenter_groups import get_workcenter_group
+
+logger = logging.getLogger('mes_dashboard.mid_section_defect')
+
+# Constants
+MAX_QUERY_DAYS = 180
+CACHE_TTL_TMTT = 300 # 5 min for TMTT detection data
+CACHE_TTL_LOSS_REASONS = 86400 # 24h for loss reason list (daily sync)
+ORACLE_IN_BATCH_SIZE = 1000 # Oracle IN clause limit
+
+# Mid-section workcenter group order range (成型 through 測試)
+MID_SECTION_ORDER_MIN = 4 # 成型
+MID_SECTION_ORDER_MAX = 11 # 測試
+
+# Top N for chart display (rest grouped as "其他")
+TOP_N = 10
+
+# Dimension column mapping for attribution charts
+DIMENSION_MAP = {
+ 'by_station': 'WORKCENTER_GROUP',
+ 'by_machine': 'EQUIPMENT_NAME',
+ 'by_workflow': 'WORKFLOW',
+ 'by_package': 'PRODUCTLINENAME',
+ 'by_pj_type': 'PJ_TYPE',
+ 'by_tmtt_machine': 'TMTT_EQUIPMENTNAME',
+}
+
+# CSV export column config
+CSV_COLUMNS = [
+ ('CONTAINERNAME', 'LOT ID'),
+ ('PJ_TYPE', 'TYPE'),
+ ('PRODUCTLINENAME', 'PACKAGE'),
+ ('WORKFLOW', 'WORKFLOW'),
+ ('FINISHEDRUNCARD', '完工流水碼'),
+ ('TMTT_EQUIPMENTNAME', 'TMTT設備'),
+ ('INPUT_QTY', '投入數'),
+ ('LOSS_REASON', '不良原因'),
+ ('DEFECT_QTY', '不良數'),
+ ('DEFECT_RATE', '不良率(%)'),
+ ('ANCESTOR_COUNT', '上游LOT數'),
+ ('UPSTREAM_MACHINES', '上游機台'),
+]
+
+
+# ============================================================
+# Public API
+# ============================================================
+
+def query_analysis(
+ start_date: str,
+ end_date: str,
+ loss_reasons: Optional[List[str]] = None,
+) -> Optional[Dict[str, Any]]:
+ """Main entry point for mid-section defect traceability analysis.
+
+ Args:
+ start_date: Start date (YYYY-MM-DD)
+ end_date: End date (YYYY-MM-DD)
+ loss_reasons: Optional list of loss reasons to filter (None = all)
+
+ Returns:
+ Dict with kpi, charts, detail, available_loss_reasons, genealogy_status.
+ """
+ error = _validate_date_range(start_date, end_date)
+ if error:
+ return {'error': error}
+
+ # Check full analysis cache
+ cache_key = make_cache_key(
+ "mid_section_defect",
+ filters={
+ 'start_date': start_date,
+ 'end_date': end_date,
+ 'loss_reasons': sorted(loss_reasons) if loss_reasons else None,
+ },
+ )
+ cached = cache_get(cache_key)
+ if cached is not None:
+ return cached
+
+ # Stage 1: TMTT detection data
+ tmtt_df = _fetch_tmtt_data(start_date, end_date)
+ if tmtt_df is None:
+ return None
+ if tmtt_df.empty:
+ return _empty_result()
+
+ # Extract available loss reasons before filtering
+ available_loss_reasons = sorted(
+ tmtt_df.loc[tmtt_df['REJECTQTY'] > 0, 'LOSSREASONNAME']
+ .dropna().unique().tolist()
+ )
+
+ # Apply loss reason filter if specified
+ if loss_reasons:
+ filtered_df = tmtt_df[
+ (tmtt_df['LOSSREASONNAME'].isin(loss_reasons))
+ | (tmtt_df['REJECTQTY'] == 0)
+ | (tmtt_df['LOSSREASONNAME'].isna())
+ ].copy()
+ else:
+ filtered_df = tmtt_df
+
+ # Stage 2: Genealogy resolution (split chain + merge expansion)
+ tmtt_cids = tmtt_df['CONTAINERID'].unique().tolist()
+    tmtt_names = {
+        r['CONTAINERID']: _safe_str(r.get('CONTAINERNAME'))
+        for _, r in tmtt_df.drop_duplicates('CONTAINERID').iterrows()
+    }
+
+ ancestors = {}
+ genealogy_status = 'ready'
+
+ if tmtt_cids:
+ try:
+ ancestors = _resolve_full_genealogy(tmtt_cids, tmtt_names)
+ except Exception as exc:
+ logger.error(f"Genealogy resolution failed: {exc}", exc_info=True)
+ genealogy_status = 'error'
+
+ # Stage 3: Upstream history for ALL CIDs (TMTT lots + ancestors)
+ all_query_cids = set(tmtt_cids)
+ for anc_set in ancestors.values():
+ all_query_cids.update(anc_set)
+ # Filter out any non-string values (NaN/None from pandas)
+ all_query_cids = {c for c in all_query_cids if isinstance(c, str) and c}
+
+ upstream_by_cid = {}
+ if all_query_cids:
+ try:
+ upstream_by_cid = _fetch_upstream_history(list(all_query_cids))
+ except Exception as exc:
+ logger.error(f"Upstream history query failed: {exc}", exc_info=True)
+            genealogy_status = 'error'
+
+    tmtt_data = _build_tmtt_lookup(filtered_df)
+ attribution = _attribute_defects(
+ tmtt_data, ancestors, upstream_by_cid, loss_reasons,
+ )
+
+ result = {
+ 'kpi': _build_kpi(filtered_df, attribution, loss_reasons),
+ 'available_loss_reasons': available_loss_reasons,
+ 'charts': _build_all_charts(attribution, tmtt_data),
+ 'daily_trend': _build_daily_trend(filtered_df, loss_reasons),
+ 'detail': _build_detail_table(filtered_df, ancestors, upstream_by_cid),
+ 'genealogy_status': genealogy_status,
+ }
+
+ # Only cache successful results (don't cache upstream errors)
+ if genealogy_status == 'ready':
+ cache_set(cache_key, result, ttl=CACHE_TTL_TMTT)
+ return result
+
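The cache key above sorts `loss_reasons` so that the same multi-select arriving in a different click order maps to one cache entry. A minimal sketch of that normalisation, using a hypothetical `make_key` stand-in for `make_cache_key` (whose real implementation lives in `mes_dashboard.core.cache`):

```python
import hashlib
import json

def make_key(prefix: str, filters: dict) -> str:
    """Stand-in for make_cache_key: stable digest over canonicalised filters."""
    payload = json.dumps(filters, sort_keys=True, ensure_ascii=False)
    return f"{prefix}:{hashlib.sha256(payload.encode('utf-8')).hexdigest()[:16]}"

# Same selection, different order — sorting makes the keys identical
a = make_key('mid_section_defect', {'loss_reasons': sorted(['斷線', '刮傷'])})
b = make_key('mid_section_defect', {'loss_reasons': sorted(['刮傷', '斷線'])})
print(a == b)  # → True
```

Without the `sorted()`, the two requests would produce distinct keys and each would pay the full three-stage pipeline cost.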
+
+def query_analysis_detail(
+ start_date: str,
+ end_date: str,
+ loss_reasons: Optional[List[str]] = None,
+ page: int = 1,
+ page_size: int = 200,
+) -> Optional[Dict[str, Any]]:
+ """Return a paginated slice of the detail table from cached analysis.
+
+ Calls query_analysis() which handles caching internally.
+ Sorts detail by DEFECT_RATE descending (worst first) before paginating.
+ """
+ result = query_analysis(start_date, end_date, loss_reasons)
+ if result is None:
+ return None
+ if 'error' in result:
+ return result
+
+ detail = result.get('detail', [])
+ detail_sorted = sorted(
+ detail, key=lambda r: r.get('DEFECT_RATE', 0), reverse=True,
+ )
+
+ total_count = len(detail_sorted)
+ total_pages = max(1, (total_count + page_size - 1) // page_size)
+ page = max(1, min(page, total_pages))
+ offset = (page - 1) * page_size
+
+ return {
+ 'detail': detail_sorted[offset:offset + page_size],
+ 'pagination': {
+ 'page': page,
+ 'page_size': page_size,
+ 'total_count': total_count,
+ 'total_pages': total_pages,
+ },
+ }
+
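The ceil-divide-then-clamp arithmetic above can be exercised in isolation; this standalone sketch mirrors the same logic:

```python
def paginate(items, page, page_size=200):
    """Clamp `page` into [1, total_pages] and return that slice plus metadata."""
    total_count = len(items)
    # Ceiling division; always at least one (possibly empty) page
    total_pages = max(1, (total_count + page_size - 1) // page_size)
    page = max(1, min(page, total_pages))
    offset = (page - 1) * page_size
    meta = {'page': page, 'page_size': page_size,
            'total_count': total_count, 'total_pages': total_pages}
    return items[offset:offset + page_size], meta

rows, meta = paginate(list(range(450)), page=3)
print(meta['total_pages'], len(rows))  # → 3 50 (the last page holds 50 rows)
```

Clamping rather than erroring means an out-of-range `page` from a stale front-end state still returns the last valid page.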
+
+def query_all_loss_reasons() -> Optional[Dict[str, Any]]:
+ """Get all TMTT loss reasons (cached daily in Redis).
+
+ Lightweight query: DISTINCT LOSSREASONNAME from last 180 days.
+ Cached with 24h TTL — suitable for dropdown population on page load.
+
+ Returns:
+ Dict with 'loss_reasons' list, or None on failure.
+ """
+ cache_key = make_cache_key("mid_section_loss_reasons")
+ cached = cache_get(cache_key)
+ if cached is not None:
+ return cached
+
+ try:
+ sql = SQLLoader.load("mid_section_defect/all_loss_reasons")
+ df = read_sql_df(sql, {})
+ if df is None:
+ logger.error("Loss reasons query returned None")
+ return None
+
+ reasons = sorted(df['LOSSREASONNAME'].dropna().unique().tolist())
+ result = {'loss_reasons': reasons}
+ logger.info(f"Loss reasons: {len(reasons)} distinct values cached (24h TTL)")
+ cache_set(cache_key, result, ttl=CACHE_TTL_LOSS_REASONS)
+ return result
+ except Exception as exc:
+ logger.error(f"Loss reasons query failed: {exc}", exc_info=True)
+ return None
+
+
+def export_csv(
+ start_date: str,
+ end_date: str,
+ loss_reasons: Optional[List[str]] = None,
+) -> Generator[str, None, None]:
+    """Stream CSV export of detail data.
+
+    On query failure, only the BOM and header row are emitted.
+
+    Yields:
+        CSV lines as strings.
+    """
+ result = query_analysis(start_date, end_date, loss_reasons)
+
+ # BOM for Excel UTF-8 compatibility
+ yield '\ufeff'
+
+ output = io.StringIO()
+ writer = csv.writer(output)
+
+ writer.writerow([label for _, label in CSV_COLUMNS])
+ yield output.getvalue()
+ output.seek(0)
+ output.truncate(0)
+
+ if result is None or 'error' in result:
+ return
+
+ for row in result.get('detail', []):
+ writer.writerow([row.get(col, '') for col, _ in CSV_COLUMNS])
+ yield output.getvalue()
+ output.seek(0)
+ output.truncate(0)
+
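The `seek(0)`/`truncate(0)` pair in `export_csv` reuses one `StringIO` buffer across yielded chunks instead of allocating a fresh writer per row. A self-contained sketch of the same pattern:

```python
import csv
import io

def stream_csv(header, rows):
    """Yield CSV text chunks, reusing a single in-memory buffer."""
    yield '\ufeff'  # BOM so Excel detects UTF-8
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    yield buf.getvalue()
    buf.seek(0)
    buf.truncate(0)  # reset the buffer instead of reallocating
    for row in rows:
        writer.writerow(row)
        yield buf.getvalue()
        buf.seek(0)
        buf.truncate(0)

chunks = list(stream_csv(['LOT ID', '不良數'], [['A001', 3], ['A002', 0]]))
print(len(chunks))  # → 4 (BOM + header + 2 data rows)
```

Because each chunk is yielded as soon as its row is written, memory stays flat regardless of row count.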
+
+# ============================================================
+# Helpers
+# ============================================================
+
+def _safe_str(v, default=''):
+ if v is None or (isinstance(v, float) and math.isnan(v)):
+ return default
+ try:
+ if pd.isna(v):
+ return default
+ except (TypeError, ValueError):
+ pass
+ return str(v)
+
+
+def _safe_float(v, default=0.0):
+ if v is None:
+ return default
+ try:
+ f = float(v)
+ if math.isnan(f) or math.isinf(f):
+ return default
+ return f
+ except (TypeError, ValueError):
+ return default
+
+
+def _safe_int(v, default=0):
+ return int(_safe_float(v, float(default)))
+
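These `_safe_*` helpers exist because `NaN`/`inf` survive Python's `json.dumps` as non-standard tokens that strict JSON parsers (including browsers) reject. A quick demonstration of the failure mode and the normalisation, with a simplified `safe_float`:

```python
import json
import math

def safe_float(v, default=0.0):
    """Coerce to float, mapping None/NaN/inf (not JSON-safe) to a default."""
    try:
        f = float(v)
    except (TypeError, ValueError):
        return default
    return default if math.isnan(f) or math.isinf(f) else f

print(json.dumps(float('nan')))              # → NaN (not valid strict JSON)
print(json.dumps(safe_float(float('nan'))))  # → 0.0
print(safe_float('2.5'), safe_float(None, -1))  # → 2.5 -1
```

Normalising at the boundary keeps every payload `JSON.parse`-able on the Vue 3 side without per-field guards.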
+
+def _empty_result() -> Dict[str, Any]:
+ return {
+ 'kpi': {
+ 'total_input': 0, 'lot_count': 0,
+ 'total_defect_qty': 0, 'total_defect_rate': 0.0,
+ 'top_loss_reason': '', 'affected_machine_count': 0,
+ },
+ 'available_loss_reasons': [],
+ 'charts': {k: [] for k in DIMENSION_MAP},
+ 'daily_trend': [],
+ 'detail': [],
+ 'genealogy_status': 'ready',
+ }
+
+
+# ============================================================
+# Validation
+# ============================================================
+
+def _validate_date_range(start_date: str, end_date: str) -> Optional[str]:
+ try:
+ start = datetime.strptime(start_date, '%Y-%m-%d')
+ end = datetime.strptime(end_date, '%Y-%m-%d')
+ except (ValueError, TypeError):
+ return '日期格式無效,請使用 YYYY-MM-DD'
+
+ if start > end:
+ return '起始日期不能晚於結束日期'
+
+ if (end - start).days > MAX_QUERY_DAYS:
+ return f'查詢範圍不能超過 {MAX_QUERY_DAYS} 天'
+
+ return None
+
+
+# ============================================================
+# Query 1: TMTT Detection Data
+# ============================================================
+
+def _fetch_tmtt_data(start_date: str, end_date: str) -> Optional[pd.DataFrame]:
+ """Execute tmtt_detection.sql and return raw DataFrame."""
+ cache_key = make_cache_key(
+ "mid_section_tmtt",
+ filters={'start_date': start_date, 'end_date': end_date},
+ )
+    cached = cache_get(cache_key)
+    if isinstance(cached, list):
+        # Cache stores list-of-dicts (JSON-serializable); reconstruct DataFrame.
+        # Any other payload type is unexpected — fall through and re-query.
+        return pd.DataFrame(cached) if cached else pd.DataFrame()
+
+ try:
+ sql = SQLLoader.load("mid_section_defect/tmtt_detection")
+ params = {'start_date': start_date, 'end_date': end_date}
+ df = read_sql_df(sql, params)
+ if df is None:
+ logger.error("TMTT detection query returned None")
+ return None
+ logger.info(
+ f"TMTT detection: {len(df)} rows, "
+ f"{df['CONTAINERID'].nunique() if not df.empty else 0} unique lots"
+ )
+ # Cache as list-of-dicts for JSON serialization via Redis
+ cache_set(cache_key, df.to_dict('records'), ttl=CACHE_TTL_TMTT)
+ return df
+ except Exception as exc:
+ logger.error(f"TMTT detection query failed: {exc}", exc_info=True)
+ return None
+
+
+# ============================================================
+# Query 2: LOT Genealogy
+# ============================================================
+
+def _resolve_full_genealogy(
+ tmtt_cids: List[str],
+ tmtt_names: Dict[str, str],
+) -> Dict[str, Set[str]]:
+ """Resolve full genealogy for TMTT lots via SPLITFROMID + COMBINEDASSYLOTS.
+
+ Step 1: BFS upward through DW_MES_CONTAINER.SPLITFROMID
+ Step 2: Merge expansion via DW_MES_PJ_COMBINEDASSYLOTS
+    Step 3: BFS on the merge source CIDs' own split chains
+
+ Args:
+ tmtt_cids: TMTT lot CONTAINERIDs
+ tmtt_names: {cid: containername} from TMTT detection data
+
+ Returns:
+ {tmtt_cid: set(all ancestor CIDs)}
+ """
+ # ---- Step 1: Split chain BFS upward ----
+ child_to_parent, cid_to_name = _bfs_split_chain(tmtt_cids, tmtt_names)
+
+ # Build initial ancestor sets per TMTT lot (walk up split chain)
+ ancestors: Dict[str, Set[str]] = {}
+ for tmtt_cid in tmtt_cids:
+ visited: Set[str] = set()
+ current = tmtt_cid
+ while current in child_to_parent:
+ parent = child_to_parent[current]
+            if parent in visited or parent == tmtt_cid:
+                break  # cycle protection (covers chains looping back to the start)
+ visited.add(parent)
+ current = parent
+ ancestors[tmtt_cid] = visited
+
+ # ---- Step 2: Merge expansion via COMBINEDASSYLOTS ----
+ all_names = set(cid_to_name.values())
+ if not all_names:
+ _log_genealogy_summary(ancestors, tmtt_cids, 0)
+ return ancestors
+
+ merge_source_map = _fetch_merge_sources(list(all_names))
+ if not merge_source_map:
+ _log_genealogy_summary(ancestors, tmtt_cids, 0)
+ return ancestors
+
+ # Reverse map: name → set of CIDs with that name
+ name_to_cids: Dict[str, Set[str]] = defaultdict(set)
+ for cid, name in cid_to_name.items():
+ name_to_cids[name].add(cid)
+
+ # Expand ancestors with merge sources
+ merge_source_cids_all: Set[str] = set()
+ for tmtt_cid in tmtt_cids:
+ self_and_ancestors = ancestors[tmtt_cid] | {tmtt_cid}
+ for cid in list(self_and_ancestors):
+ name = cid_to_name.get(cid)
+ if name and name in merge_source_map:
+ for src_cid in merge_source_map[name]:
+ if src_cid != cid and src_cid not in self_and_ancestors:
+ ancestors[tmtt_cid].add(src_cid)
+ merge_source_cids_all.add(src_cid)
+
+ # ---- Step 3: BFS on merge source CIDs ----
+ seen = set(tmtt_cids) | set(child_to_parent.values()) | set(child_to_parent.keys())
+ new_merge_cids = list(merge_source_cids_all - seen)
+ if new_merge_cids:
+ merge_c2p, _ = _bfs_split_chain(new_merge_cids, {})
+ child_to_parent.update(merge_c2p)
+
+ # Walk up merge sources' split chains for each TMTT lot
+ for tmtt_cid in tmtt_cids:
+ for merge_cid in list(ancestors[tmtt_cid] & merge_source_cids_all):
+ current = merge_cid
+ while current in merge_c2p:
+ parent = merge_c2p[current]
+ if parent in ancestors[tmtt_cid]:
+ break
+ ancestors[tmtt_cid].add(parent)
+ current = parent
+
+ _log_genealogy_summary(ancestors, tmtt_cids, len(merge_source_cids_all))
+ return ancestors
+
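Step 1's walk-up is just repeated lookups over the child→parent split edges, with a visited set so cyclic SPLITFROMID data cannot loop forever. A toy illustration with hypothetical lot IDs:

```python
def walk_up(start, child_to_parent):
    """Collect every ancestor of `start` by following child→parent edges."""
    visited = set()
    current = start
    while current in child_to_parent:
        parent = child_to_parent[current]
        if parent in visited or parent == start:
            break  # cycle protection: never revisit a node
        visited.add(parent)
        current = parent
    return visited

# LOT-C was split from LOT-B, which was split from LOT-A
print(sorted(walk_up('LOT-C', {'LOT-C': 'LOT-B', 'LOT-B': 'LOT-A'})))
# → ['LOT-A', 'LOT-B']

# Corrupt data containing a cycle still terminates
print(sorted(walk_up('X', {'X': 'Y', 'Y': 'X'})))  # → ['Y']
```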
+
+def _bfs_split_chain(
+ start_cids: List[str],
+ initial_names: Dict[str, str],
+) -> Tuple[Dict[str, str], Dict[str, str]]:
+ """BFS upward through DW_MES_CONTAINER.SPLITFROMID.
+
+ Args:
+ start_cids: Starting CONTAINERIDs
+ initial_names: Pre-known {cid: containername} mappings
+
+ Returns:
+ child_to_parent: {child_cid: parent_cid} for all split edges
+ cid_to_name: {cid: containername} for all encountered CIDs
+ """
+ child_to_parent: Dict[str, str] = {}
+ cid_to_name: Dict[str, str] = dict(initial_names)
+ seen: Set[str] = set(start_cids)
+ frontier = list(start_cids)
+ bfs_round = 0
+
+ while frontier:
+ bfs_round += 1
+ batch_results: List[Dict[str, Any]] = []
+
+ for i in range(0, len(frontier), ORACLE_IN_BATCH_SIZE):
+ batch = frontier[i:i + ORACLE_IN_BATCH_SIZE]
+ builder = QueryBuilder()
+ builder.add_in_condition("c.CONTAINERID", batch)
+ sql = SQLLoader.load_with_params(
+ "mid_section_defect/split_chain",
+ CID_FILTER=builder.get_conditions_sql(),
+ )
+ try:
+ df = read_sql_df(sql, builder.params)
+ if df is not None and not df.empty:
+ batch_results.extend(df.to_dict('records'))
+ except Exception as exc:
+ logger.warning(f"Split chain BFS round {bfs_round} batch failed: {exc}")
+
+ new_parents: Set[str] = set()
+ for row in batch_results:
+ cid = row['CONTAINERID']
+ split_from = row.get('SPLITFROMID')
+ name = row.get('CONTAINERNAME')
+
+ if isinstance(name, str) and name:
+ cid_to_name[cid] = name
+ if isinstance(split_from, str) and split_from and cid != split_from:
+ child_to_parent[cid] = split_from
+ if split_from not in seen:
+ new_parents.add(split_from)
+ seen.add(split_from)
+
+ frontier = list(new_parents)
+ if bfs_round > 20:
+ logger.warning("Split chain BFS exceeded 20 rounds, stopping")
+ break
+
+ logger.info(
+ f"Split chain BFS: {bfs_round} rounds, "
+ f"{len(child_to_parent)} split edges, "
+ f"{len(cid_to_name)} names collected"
+ )
+ return child_to_parent, cid_to_name
+
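The frontier loop above terminates once a round discovers no new parents (or after the 20-round guard). Sketched against an in-memory stand-in for the split-chain SQL — `fetch_parents` and its lot IDs are hypothetical:

```python
ORACLE_IN_BATCH_SIZE = 1000

def fetch_parents(cids):
    """Stand-in for the split_chain SQL: {containerid: splitfromid}."""
    table = {'D': 'C', 'E': 'B', 'C': 'B', 'B': 'A'}
    return {c: table[c] for c in cids if c in table}

def bfs_split_chain(start_cids):
    child_to_parent = {}
    seen = set(start_cids)
    frontier = list(start_cids)
    while frontier:
        new_parents = set()
        # Chunk the frontier so each IN clause stays under Oracle's 1000-item limit
        for i in range(0, len(frontier), ORACLE_IN_BATCH_SIZE):
            for cid, parent in fetch_parents(frontier[i:i + ORACLE_IN_BATCH_SIZE]).items():
                child_to_parent[cid] = parent
                if parent not in seen:
                    seen.add(parent)
                    new_parents.add(parent)
        frontier = list(new_parents)  # empty frontier ends the BFS
    return child_to_parent

print(sorted(bfs_split_chain(['D', 'E']).items()))
# → [('B', 'A'), ('C', 'B'), ('D', 'C'), ('E', 'B')]
```

Each BFS round costs one batched round-trip per 1,000 frontier CIDs, so the total query count is bounded by chain depth, not lot count.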
+
+def _fetch_merge_sources(
+ finished_names: List[str],
+) -> Dict[str, List[str]]:
+ """Find source lots merged into finished lots via COMBINEDASSYLOTS.
+
+ Args:
+ finished_names: CONTAINERNAMEs to look up as FINISHEDNAME
+
+ Returns:
+ {finished_name: [source_cid, ...]}
+ """
+ result: Dict[str, List[str]] = {}
+
+ for i in range(0, len(finished_names), ORACLE_IN_BATCH_SIZE):
+ batch = finished_names[i:i + ORACLE_IN_BATCH_SIZE]
+ builder = QueryBuilder()
+ builder.add_in_condition("ca.FINISHEDNAME", batch)
+ sql = SQLLoader.load_with_params(
+ "mid_section_defect/merge_lookup",
+ FINISHED_NAME_FILTER=builder.get_conditions_sql(),
+ )
+ try:
+ df = read_sql_df(sql, builder.params)
+ if df is not None and not df.empty:
+ for _, row in df.iterrows():
+ fn = row['FINISHEDNAME']
+ src = row['SOURCE_CID']
+ if isinstance(fn, str) and fn and isinstance(src, str) and src:
+ result.setdefault(fn, []).append(src)
+ except Exception as exc:
+ logger.warning(f"Merge lookup batch failed: {exc}")
+
+ if result:
+ total_sources = sum(len(v) for v in result.values())
+ logger.info(
+ f"Merge lookup: {len(result)} finished names → {total_sources} source CIDs"
+ )
+ return result
+
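Merge expansion joins on CONTAINERNAME rather than CONTAINERID: each lot's name is looked up in the finished-name map and the matching source CIDs are grafted into the ancestor set. A toy sketch with hypothetical names and CIDs:

```python
# Hypothetical data: CID-1's name LOT-A appears as a FINISHEDNAME,
# meaning two source lots were combined into it
cid_to_name = {'CID-1': 'LOT-A', 'CID-2': 'LOT-B'}
merge_source_map = {'LOT-A': ['CID-9', 'CID-10']}

ancestors = {'CID-1': set(), 'CID-2': set()}
for tmtt_cid in ancestors:
    # Check the lot itself plus everything already in its ancestor set
    for cid in ancestors[tmtt_cid] | {tmtt_cid}:
        name = cid_to_name.get(cid)
        for src_cid in merge_source_map.get(name, []):
            if src_cid != cid:
                ancestors[tmtt_cid].add(src_cid)

print(sorted(ancestors['CID-1']), sorted(ancestors['CID-2']))
# → ['CID-10', 'CID-9'] []
```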
+
+def _log_genealogy_summary(
+ ancestors: Dict[str, Set[str]],
+ tmtt_cids: List[str],
+ merge_count: int,
+) -> None:
+ total_ancestors = sum(len(v) for v in ancestors.values())
+ lots_with_ancestors = sum(1 for v in ancestors.values() if v)
+ logger.info(
+ f"Genealogy resolved: {lots_with_ancestors}/{len(tmtt_cids)} lots have ancestors, "
+ f"{total_ancestors} total ancestor links, "
+ f"{merge_count} merge sources"
+ )
+
+
+# ============================================================
+# Query 3: Upstream Production History
+# ============================================================
+
+def _fetch_upstream_history(
+ all_cids: List[str],
+) -> Dict[str, List[Dict[str, Any]]]:
+ """Fetch upstream production history for ancestor CONTAINERIDs.
+
+ Batches queries to respect Oracle IN clause limit.
+ Filters by mid-section workcenter groups (order 4-11) in Python.
+
+ Returns:
+ {containerid: [{'workcenter_group': ..., 'equipment_name': ..., ...}, ...]}
+ """
+ if not all_cids:
+ return {}
+
+ unique_cids = list(set(all_cids))
+ all_rows = []
+
+ # Batch query in chunks of ORACLE_IN_BATCH_SIZE
+ for i in range(0, len(unique_cids), ORACLE_IN_BATCH_SIZE):
+ batch = unique_cids[i:i + ORACLE_IN_BATCH_SIZE]
+
+ builder = QueryBuilder()
+ builder.add_in_condition("h.CONTAINERID", batch)
+ conditions_sql = builder.get_conditions_sql()
+ params = builder.params
+
+ sql = SQLLoader.load_with_params(
+ "mid_section_defect/upstream_history",
+ ANCESTOR_FILTER=conditions_sql,
+ )
+
+ try:
+ df = read_sql_df(sql, params)
+ if df is not None and not df.empty:
+ all_rows.append(df)
+ except Exception as exc:
+ logger.error(
+ f"Upstream history batch {i//ORACLE_IN_BATCH_SIZE + 1} failed: {exc}",
+ exc_info=True,
+ )
+
+ if not all_rows:
+ return {}
+
+ combined = pd.concat(all_rows, ignore_index=True)
+
+ # Filter by mid-section workcenter groups in Python
+ result: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
+ for _, row in combined.iterrows():
+ wc_name = row.get('WORKCENTERNAME', '')
+ group_name, order = get_workcenter_group(wc_name)
+ if group_name is None or order < MID_SECTION_ORDER_MIN or order > MID_SECTION_ORDER_MAX:
+ continue
+
+ cid = row['CONTAINERID']
+ result[cid].append({
+ 'workcenter_group': group_name,
+ 'workcenter_group_order': order,
+ 'equipment_id': _safe_str(row.get('EQUIPMENTID')),
+ 'equipment_name': _safe_str(row.get('EQUIPMENTNAME')),
+ 'spec_name': _safe_str(row.get('SPECNAME')),
+ 'track_in_time': _safe_str(row.get('TRACKINTIMESTAMP')),
+ })
+
+ logger.info(
+ f"Upstream history: {len(result)} lots with mid-section records, "
+ f"from {len(unique_cids)} queried CIDs"
+ )
+ return dict(result)
+
+
+# ============================================================
+# TMTT Data Lookup
+# ============================================================
+
+def _build_tmtt_lookup(
+ df: pd.DataFrame,
+) -> Dict[str, Dict[str, Any]]:
+ """Build lookup dict from TMTT DataFrame.
+
+ Returns:
+ {containerid: {
+ 'trackinqty': int,
+ 'rejectqty_by_reason': {reason: qty},
+ 'containername': str,
+ 'workflow': str,
+ 'productlinename': str,
+ 'pj_type': str,
+ 'tmtt_equipmentname': str,
+ 'trackintimestamp': str,
+ }}
+ """
+ if df.empty:
+ return {}
+
+ lookup: Dict[str, Dict[str, Any]] = {}
+ for _, row in df.iterrows():
+ cid = row['CONTAINERID']
+ if cid not in lookup:
+ lookup[cid] = {
+ 'trackinqty': _safe_int(row.get('TRACKINQTY')),
+ 'rejectqty_by_reason': {},
+ 'containername': _safe_str(row.get('CONTAINERNAME')),
+ 'workflow': _safe_str(row.get('WORKFLOW')),
+ 'productlinename': _safe_str(row.get('PRODUCTLINENAME')),
+ 'pj_type': _safe_str(row.get('PJ_TYPE')),
+ 'tmtt_equipmentname': _safe_str(row.get('TMTT_EQUIPMENTNAME')),
+ 'trackintimestamp': _safe_str(row.get('TRACKINTIMESTAMP')),
+ }
+
+ reason = row.get('LOSSREASONNAME')
+ qty = _safe_int(row.get('REJECTQTY'))
+ if reason and qty > 0:
+ lookup[cid]['rejectqty_by_reason'][reason] = (
+ lookup[cid]['rejectqty_by_reason'].get(reason, 0) + qty
+ )
+
+ return lookup
+
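The lookup build collapses one-row-per-(lot, reject reason) into a single entry per lot, summing repeated reasons. The core aggregation, on toy rows:

```python
from collections import defaultdict

rows = [  # one raw row per (lot, reject reason); TRACKINQTY repeats per lot
    {'CONTAINERID': 'L1', 'TRACKINQTY': 1000, 'LOSSREASONNAME': '刮傷', 'REJECTQTY': 5},
    {'CONTAINERID': 'L1', 'TRACKINQTY': 1000, 'LOSSREASONNAME': '刮傷', 'REJECTQTY': 3},
    {'CONTAINERID': 'L1', 'TRACKINQTY': 1000, 'LOSSREASONNAME': '斷線', 'REJECTQTY': 2},
]

lookup = {}
for row in rows:
    entry = lookup.setdefault(row['CONTAINERID'], {
        'trackinqty': row['TRACKINQTY'],  # taken once, not summed per row
        'rejectqty_by_reason': defaultdict(int),
    })
    if row['REJECTQTY'] > 0:
        entry['rejectqty_by_reason'][row['LOSSREASONNAME']] += row['REJECTQTY']

print(dict(lookup['L1']['rejectqty_by_reason']))  # → {'刮傷': 8, '斷線': 2}
```

Taking TRACKINQTY from the first row only is what keeps input quantities from being multiplied by the number of reject reasons.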
+
+# ============================================================
+# Defect Attribution Engine
+# ============================================================
+
+def _attribute_defects(
+ tmtt_data: Dict[str, Dict[str, Any]],
+ ancestors: Dict[str, Set[str]],
+ upstream_by_cid: Dict[str, List[Dict[str, Any]]],
+ loss_reasons: Optional[List[str]] = None,
+) -> List[Dict[str, Any]]:
+ """Attribute TMTT defects to upstream machines.
+
+ For each upstream machine M at station S:
+ - Find all TMTT lots whose ancestors (or self) used M
+ - attributed_rejectqty = SUM(selected REJECTQTY)
+ - attributed_trackinqty = SUM(TRACKINQTY)
+ - rate = attributed_rejectqty / attributed_trackinqty × 100
+
+ Returns:
+ List of attribution records, one per (workcenter_group, equipment_name).
+ """
+ # machine_key → set of TMTT lot CIDs
+ machine_to_tmtt: Dict[Tuple[str, str, str], Set[str]] = defaultdict(set)
+
+ for tmtt_cid, data in tmtt_data.items():
+ ancestor_set = ancestors.get(tmtt_cid, set())
+ # Include the TMTT lot itself (it may have upstream history if no split)
+ all_cids = ancestor_set | {tmtt_cid}
+
+ for anc_cid in all_cids:
+ for record in upstream_by_cid.get(anc_cid, []):
+ machine_key = (
+ record['workcenter_group'],
+ record['equipment_name'],
+ record['equipment_id'],
+ )
+ machine_to_tmtt[machine_key].add(tmtt_cid)
+
+ # Calculate attribution per machine
+ attribution = []
+ for machine_key, tmtt_lot_set in machine_to_tmtt.items():
+ wc_group, eq_name, eq_id = machine_key
+
+ total_trackinqty = sum(
+ tmtt_data[cid]['trackinqty'] for cid in tmtt_lot_set
+ if cid in tmtt_data
+ )
+
+ # Sum defects for selected loss reasons
+ total_rejectqty = 0
+ for cid in tmtt_lot_set:
+ if cid not in tmtt_data:
+ continue
+ by_reason = tmtt_data[cid]['rejectqty_by_reason']
+ if loss_reasons:
+ for reason in loss_reasons:
+ total_rejectqty += by_reason.get(reason, 0)
+ else:
+ total_rejectqty += sum(by_reason.values())
+
+ rate = round(total_rejectqty / total_trackinqty * 100, 4) if total_trackinqty else 0.0
+
+ # Collect dimension metadata from linked TMTT lots
+ workflows = set()
+ packages = set()
+ pj_types = set()
+ tmtt_machines = set()
+ for cid in tmtt_lot_set:
+ if cid not in tmtt_data:
+ continue
+ d = tmtt_data[cid]
+ if d['workflow']:
+ workflows.add(d['workflow'])
+ if d['productlinename']:
+ packages.add(d['productlinename'])
+ if d['pj_type']:
+ pj_types.add(d['pj_type'])
+ if d['tmtt_equipmentname']:
+ tmtt_machines.add(d['tmtt_equipmentname'])
+
+ attribution.append({
+ 'WORKCENTER_GROUP': wc_group,
+ 'EQUIPMENT_NAME': eq_name,
+ 'EQUIPMENT_ID': eq_id,
+ 'TMTT_LOT_COUNT': len(tmtt_lot_set),
+ 'INPUT_QTY': total_trackinqty,
+ 'DEFECT_QTY': total_rejectqty,
+ 'DEFECT_RATE': rate,
+ # Flatten multi-valued dimensions for charting
+ 'WORKFLOW': ', '.join(sorted(workflows)) if workflows else '(未知)',
+ 'PRODUCTLINENAME': ', '.join(sorted(packages)) if packages else '(未知)',
+ 'PJ_TYPE': ', '.join(sorted(pj_types)) if pj_types else '(未知)',
+ 'TMTT_EQUIPMENTNAME': ', '.join(sorted(tmtt_machines)) if tmtt_machines else '(未知)',
+ })
+
+ # Sort by defect rate DESC
+ attribution.sort(key=lambda x: x['DEFECT_RATE'], reverse=True)
+
+ return attribution
+
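The attribution engine is an inverted index: lot→machines (via ancestors) flipped into machine→lots, then input/defect quantities summed per machine. A lot that touched several machines contributes its full quantity to each, which is the intended attribution semantics. A reduced sketch with hypothetical lots and machines:

```python
from collections import defaultdict

def attribute(tmtt_lots, lot_to_machines):
    """Invert lot→machines into machine→lots, then aggregate per machine.

    tmtt_lots:       {lot: {'input': int, 'defect': int}}
    lot_to_machines: {lot: [machine, ...]}  — upstream history of self + ancestors
    """
    machine_to_lots = defaultdict(set)
    for lot, machines in lot_to_machines.items():
        for machine in machines:
            machine_to_lots[machine].add(lot)

    rows = []
    for machine, lot_set in machine_to_lots.items():
        input_qty = sum(tmtt_lots[l]['input'] for l in lot_set)
        defect_qty = sum(tmtt_lots[l]['defect'] for l in lot_set)
        rate = round(defect_qty / input_qty * 100, 4) if input_qty else 0.0
        rows.append({'machine': machine, 'input': input_qty,
                     'defect': defect_qty, 'rate': rate})
    rows.sort(key=lambda r: r['rate'], reverse=True)
    return rows

lots = {'L1': {'input': 1000, 'defect': 30}, 'L2': {'input': 500, 'defect': 5}}
history = {'L1': ['M-A', 'M-B'], 'L2': ['M-B']}
for row in attribute(lots, history):
    print(row['machine'], row['rate'])  # → M-A 3.0, then M-B 2.3333
```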
+
+# ============================================================
+# KPI Builder
+# ============================================================
+
+def _build_kpi(
+ df: pd.DataFrame,
+ attribution: List[Dict[str, Any]],
+ loss_reasons: Optional[List[str]] = None,
+) -> Dict[str, Any]:
+ """Build KPI summary."""
+ if df.empty:
+ return {
+ 'total_input': 0, 'lot_count': 0,
+ 'total_defect_qty': 0, 'total_defect_rate': 0.0,
+ 'top_loss_reason': '', 'affected_machine_count': 0,
+ }
+
+ # Deduplicate for INPUT
+ unique_lots = df.drop_duplicates(subset=['CONTAINERID'])
+ total_input = int(unique_lots['TRACKINQTY'].sum())
+ lot_count = len(unique_lots)
+
+ # Defect totals
+ defect_rows = df[df['REJECTQTY'] > 0]
+ if loss_reasons:
+ defect_rows = defect_rows[defect_rows['LOSSREASONNAME'].isin(loss_reasons)]
+
+ total_defect_qty = int(defect_rows['REJECTQTY'].sum()) if not defect_rows.empty else 0
+ total_defect_rate = round(
+ total_defect_qty / total_input * 100, 4
+ ) if total_input else 0.0
+
+ # Top loss reason
+ top_reason = ''
+ if not defect_rows.empty:
+ reason_sums = defect_rows.groupby('LOSSREASONNAME')['REJECTQTY'].sum()
+ if not reason_sums.empty:
+ top_reason = _safe_str(reason_sums.idxmax())
+
+ # Count unique upstream machines with defects attributed
+ affected_machines = sum(1 for a in attribution if a['DEFECT_QTY'] > 0)
+
+ return {
+ 'total_input': total_input,
+ 'lot_count': lot_count,
+ 'total_defect_qty': total_defect_qty,
+ 'total_defect_rate': total_defect_rate,
+ 'top_loss_reason': top_reason,
+ 'affected_machine_count': affected_machines,
+ }
+
+
+# ============================================================
+# Chart Builders
+# ============================================================
+
+def _build_chart_data(
+ records: List[Dict[str, Any]],
+ dimension: str,
+) -> List[Dict[str, Any]]:
+ """Build Top N + Other Pareto chart data for a given dimension.
+
+ Groups attribution records by dimension, sums defect qty, takes top N,
+ groups rest as "其他".
+ """
+ if not records:
+ return []
+
+ # Aggregate by dimension
+ dim_agg: Dict[str, Dict[str, Any]] = defaultdict(
+ lambda: {'input_qty': 0, 'defect_qty': 0, 'lot_count': 0}
+ )
+ for rec in records:
+ key = rec.get(dimension, '(未知)') or '(未知)'
+        # Multi-valued dimensions (comma-joined) are split, attributing the full
+        # qty to each value — so per-value totals can exceed the raw sums
+ if ',' in key:
+ keys = [k.strip() for k in key.split(',')]
+ else:
+ keys = [key]
+ for k in keys:
+ dim_agg[k]['input_qty'] += rec['INPUT_QTY']
+ dim_agg[k]['defect_qty'] += rec['DEFECT_QTY']
+ dim_agg[k]['lot_count'] += rec['TMTT_LOT_COUNT']
+
+ # Sort by defect qty DESC
+ sorted_items = sorted(dim_agg.items(), key=lambda x: x[1]['defect_qty'], reverse=True)
+
+ # Top N + Other
+ items = []
+ other = {'input_qty': 0, 'defect_qty': 0, 'lot_count': 0}
+ for i, (name, data) in enumerate(sorted_items):
+ if i < TOP_N:
+ rate = round(data['defect_qty'] / data['input_qty'] * 100, 4) if data['input_qty'] else 0.0
+ items.append({
+ 'name': name,
+ 'input_qty': data['input_qty'],
+ 'defect_qty': data['defect_qty'],
+ 'defect_rate': rate,
+ 'lot_count': data['lot_count'],
+ })
+ else:
+ other['input_qty'] += data['input_qty']
+ other['defect_qty'] += data['defect_qty']
+ other['lot_count'] += data['lot_count']
+
+ if other['defect_qty'] > 0 or other['input_qty'] > 0:
+ rate = round(other['defect_qty'] / other['input_qty'] * 100, 4) if other['input_qty'] else 0.0
+ items.append({
+ 'name': '其他',
+ 'input_qty': other['input_qty'],
+ 'defect_qty': other['defect_qty'],
+ 'defect_rate': rate,
+ 'lot_count': other['lot_count'],
+ })
+
+ # Add cumulative percentage
+ total_defects = sum(item['defect_qty'] for item in items)
+ cumsum = 0
+ for item in items:
+ cumsum += item['defect_qty']
+ item['cumulative_pct'] = round(cumsum / total_defects * 100, 2) if total_defects else 0.0
+
+ return items
+
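The Top-N + 其他 + cumulative-percentage shape above is what the front-end Pareto charts consume. Stripped to a single metric (TOP_N lowered to 2 for illustration; the real charts use 10):

```python
TOP_N = 2  # lowered for illustration; the real charts use 10

def pareto(counts, top_n=TOP_N):
    """Top-N entries by defect qty, remainder rolled into '其他', plus cumulative %."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    items = [{'name': name, 'defect_qty': qty} for name, qty in ranked[:top_n]]
    other_qty = sum(qty for _, qty in ranked[top_n:])
    if other_qty > 0:
        items.append({'name': '其他', 'defect_qty': other_qty})
    total = sum(it['defect_qty'] for it in items)
    running = 0
    for it in items:
        running += it['defect_qty']
        it['cumulative_pct'] = round(running / total * 100, 2) if total else 0.0
    return items

for it in pareto({'M-01': 50, 'M-02': 30, 'M-03': 15, 'M-04': 5}):
    print(it['name'], it['defect_qty'], it['cumulative_pct'])
# → M-01 50 50.0 / M-02 30 80.0 / 其他 20 100.0
```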
+
+def _build_loss_reason_chart(
+ df: pd.DataFrame,
+ loss_reasons: Optional[List[str]] = None,
+) -> List[Dict[str, Any]]:
+ """Build loss reason distribution chart from TMTT data (not attribution)."""
+ if df.empty:
+ return []
+
+ defect_rows = df[df['REJECTQTY'] > 0].copy()
+ if loss_reasons:
+ defect_rows = defect_rows[defect_rows['LOSSREASONNAME'].isin(loss_reasons)]
+
+ if defect_rows.empty:
+ return []
+
+ # Aggregate by loss reason
+ reason_agg = defect_rows.groupby('LOSSREASONNAME')['REJECTQTY'].sum()
+ reason_agg = reason_agg.sort_values(ascending=False)
+
+ # Deduplicated input per loss reason
+ unique_lots = df.drop_duplicates(subset=['CONTAINERID'])
+ total_input = int(unique_lots['TRACKINQTY'].sum())
+
+ items = []
+ total_defects = int(reason_agg.sum())
+ cumsum = 0
+ for reason, qty in reason_agg.items():
+ qty_int = int(qty)
+ cumsum += qty_int
+ rate = round(qty_int / total_input * 100, 4) if total_input else 0.0
+ items.append({
+ 'name': _safe_str(reason),
+ 'defect_qty': qty_int,
+ 'defect_rate': rate,
+ 'cumulative_pct': round(cumsum / total_defects * 100, 2) if total_defects else 0.0,
+ })
+
+ return items
+
+
+def _build_all_charts(
+ attribution: List[Dict[str, Any]],
+ tmtt_data: Dict[str, Dict[str, Any]],
+) -> Dict[str, List[Dict]]:
+ """Build chart data for all dimensions."""
+ charts = {}
+ for key, dim_col in DIMENSION_MAP.items():
+ charts[key] = _build_chart_data(attribution, dim_col)
+
+ # Loss reason chart is built from TMTT data directly (not attribution)
+ # Reconstruct a minimal df from tmtt_data for the loss reason chart
+ loss_rows = []
+ for cid, data in tmtt_data.items():
+ trackinqty = data['trackinqty']
+ if data['rejectqty_by_reason']:
+ for reason, qty in data['rejectqty_by_reason'].items():
+ loss_rows.append({
+ 'CONTAINERID': cid,
+ 'TRACKINQTY': trackinqty,
+ 'LOSSREASONNAME': reason,
+ 'REJECTQTY': qty,
+ })
+ else:
+ loss_rows.append({
+ 'CONTAINERID': cid,
+ 'TRACKINQTY': trackinqty,
+ 'LOSSREASONNAME': None,
+ 'REJECTQTY': 0,
+ })
+ if loss_rows:
+ loss_df = pd.DataFrame(loss_rows)
+ charts['by_loss_reason'] = _build_loss_reason_chart(loss_df)
+ else:
+ charts['by_loss_reason'] = []
+
+ return charts
+
+
+# ============================================================
+# Daily Trend
+# ============================================================
+
+def _build_daily_trend(
+ df: pd.DataFrame,
+ loss_reasons: Optional[List[str]] = None,
+) -> List[Dict[str, Any]]:
+ """Build daily defect rate trend data."""
+ if df.empty:
+ return []
+
+ work_df = df.copy()
+ work_df['DATE'] = pd.to_datetime(work_df['TRACKINTIMESTAMP']).dt.strftime('%Y-%m-%d')
+
+ # Daily INPUT (deduplicated by CONTAINERID per date)
+ daily_input = (
+ work_df.drop_duplicates(subset=['CONTAINERID', 'DATE'])
+ .groupby('DATE')['TRACKINQTY']
+ .sum()
+ )
+
+ # Daily defects
+ defect_rows = work_df[work_df['REJECTQTY'] > 0]
+ if loss_reasons:
+ defect_rows = defect_rows[defect_rows['LOSSREASONNAME'].isin(loss_reasons)]
+
+ daily_defects = (
+ defect_rows.groupby('DATE')['REJECTQTY'].sum()
+ if not defect_rows.empty
+ else pd.Series(dtype=float)
+ )
+
+ combined = pd.DataFrame({
+ 'input_qty': daily_input,
+ 'defect_qty': daily_defects,
+ }).fillna(0).astype({'defect_qty': int, 'input_qty': int})
+
+ combined['defect_rate'] = (
+ combined['defect_qty'] / combined['input_qty'] * 100
+ ).round(4).where(combined['input_qty'] > 0, 0.0)
+
+ combined = combined.sort_index()
+
+ result = []
+ for date, row in combined.iterrows():
+ result.append({
+ 'date': str(date),
+ 'input_qty': _safe_int(row['input_qty']),
+ 'defect_qty': _safe_int(row['defect_qty']),
+ 'defect_rate': _safe_float(row['defect_rate']),
+ })
+
+ return result
+
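The dedupe-then-align step above is the subtle part: input must be counted once per (lot, date) even though the raw frame repeats TRACKINQTY on every reject-reason row, and dates with input but no defects need a zero after the outer alignment. A standalone sketch (assumes pandas is available):

```python
import pandas as pd

# Toy TMTT rows: one row per (lot, reject reason); TRACKINQTY repeats per lot
df = pd.DataFrame({
    'CONTAINERID': ['L1', 'L1', 'L2'],
    'DATE': ['2026-02-01', '2026-02-01', '2026-02-02'],
    'TRACKINQTY': [1000, 1000, 500],
    'REJECTQTY': [20, 10, 0],
})

# Dedupe by (lot, date) so repeated reject rows don't inflate daily input
daily_input = (df.drop_duplicates(subset=['CONTAINERID', 'DATE'])
                 .groupby('DATE')['TRACKINQTY'].sum())
daily_defects = df[df['REJECTQTY'] > 0].groupby('DATE')['REJECTQTY'].sum()

# Outer alignment: dates with input but no defects get NaN, filled to 0
trend = pd.DataFrame({'input_qty': daily_input, 'defect_qty': daily_defects}).fillna(0)
trend['defect_rate'] = (trend['defect_qty'] / trend['input_qty'] * 100).round(4)
print(trend['defect_rate'].tolist())  # → [3.0, 0.0]
```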
+
+# ============================================================
+# Detail Table
+# ============================================================
+
+def _build_detail_table(
+ df: pd.DataFrame,
+ ancestors: Dict[str, Set[str]],
+ upstream_by_cid: Dict[str, List[Dict[str, Any]]],
+) -> List[Dict[str, Any]]:
+ """Build LOT-level detail table with upstream machine info."""
+ if df.empty:
+ return []
+
+ # Unique LOT info
+ lot_cols = [
+ 'CONTAINERID', 'CONTAINERNAME', 'PJ_TYPE', 'PRODUCTLINENAME',
+ 'WORKFLOW', 'FINISHEDRUNCARD', 'TMTT_EQUIPMENTNAME', 'TRACKINQTY',
+ ]
+ lots = df.drop_duplicates(subset=['CONTAINERID'])[
+ [c for c in lot_cols if c in df.columns]
+ ].copy()
+
+ # Aggregate defects per LOT per loss reason
+ defect_rows = df[df['REJECTQTY'] > 0]
+ lot_defects: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
+ for _, row in defect_rows.iterrows():
+ cid = row['CONTAINERID']
+ reason = _safe_str(row.get('LOSSREASONNAME'))
+ qty = _safe_int(row.get('REJECTQTY'))
+ if reason and qty > 0:
+ lot_defects[cid][reason] += qty
+
+ result = []
+ for _, row in lots.iterrows():
+ cid = row['CONTAINERID']
+ input_qty = _safe_int(row.get('TRACKINQTY'))
+ ancestor_set = ancestors.get(cid, set())
+ all_cids = ancestor_set | {cid}
+
+ # Collect upstream machines
+ upstream_machines = set()
+ for anc_cid in all_cids:
+ for rec in upstream_by_cid.get(anc_cid, []):
+ upstream_machines.add(f"{rec['workcenter_group']}/{rec['equipment_name']}")
+
+ # Build one row per loss reason for this LOT
+ reasons = lot_defects.get(cid, {})
+ if reasons:
+ for reason, qty in sorted(reasons.items()):
+ rate = round(qty / input_qty * 100, 4) if input_qty else 0.0
+ result.append({
+ 'CONTAINERNAME': _safe_str(row.get('CONTAINERNAME')),
+ 'PJ_TYPE': _safe_str(row.get('PJ_TYPE')),
+ 'PRODUCTLINENAME': _safe_str(row.get('PRODUCTLINENAME')),
+ 'WORKFLOW': _safe_str(row.get('WORKFLOW')),
+ 'FINISHEDRUNCARD': _safe_str(row.get('FINISHEDRUNCARD')),
+ 'TMTT_EQUIPMENTNAME': _safe_str(row.get('TMTT_EQUIPMENTNAME')),
+ 'INPUT_QTY': input_qty,
+ 'LOSS_REASON': reason,
+ 'DEFECT_QTY': qty,
+ 'DEFECT_RATE': rate,
+ 'ANCESTOR_COUNT': len(ancestor_set),
+ 'UPSTREAM_MACHINES': ', '.join(sorted(upstream_machines)),
+ })
+ else:
+ result.append({
+ 'CONTAINERNAME': _safe_str(row.get('CONTAINERNAME')),
+ 'PJ_TYPE': _safe_str(row.get('PJ_TYPE')),
+ 'PRODUCTLINENAME': _safe_str(row.get('PRODUCTLINENAME')),
+ 'WORKFLOW': _safe_str(row.get('WORKFLOW')),
+ 'FINISHEDRUNCARD': _safe_str(row.get('FINISHEDRUNCARD')),
+ 'TMTT_EQUIPMENTNAME': _safe_str(row.get('TMTT_EQUIPMENTNAME')),
+ 'INPUT_QTY': input_qty,
+ 'LOSS_REASON': '',
+ 'DEFECT_QTY': 0,
+ 'DEFECT_RATE': 0.0,
+ 'ANCESTOR_COUNT': len(ancestor_set),
+ 'UPSTREAM_MACHINES': ', '.join(sorted(upstream_machines)),
+ })
+
+ return result
diff --git a/src/mes_dashboard/sql/mid_section_defect/all_loss_reasons.sql b/src/mes_dashboard/sql/mid_section_defect/all_loss_reasons.sql
new file mode 100644
index 0000000..6a5da8e
--- /dev/null
+++ b/src/mes_dashboard/sql/mid_section_defect/all_loss_reasons.sql
@@ -0,0 +1,16 @@
+-- Mid-Section Defect - All Loss Reasons (cached daily)
+-- Lightweight query for filter dropdown population.
+-- Returns ALL loss reasons across all stations (not just TMTT).
+--
+-- Tables used:
+-- DWH.DW_MES_LOTREJECTHISTORY (TXNDATE indexed)
+--
+-- Performance:
+-- DISTINCT on one column with date filter only.
+-- Cached 24h in Redis.
+--
+SELECT DISTINCT r.LOSSREASONNAME
+FROM DWH.DW_MES_LOTREJECTHISTORY r
+WHERE r.TXNDATE >= SYSDATE - 180
+ AND r.LOSSREASONNAME IS NOT NULL
+ORDER BY r.LOSSREASONNAME
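The header notes this dropdown query is cached for 24h in Redis. A minimal in-memory stand-in for that cache-aside pattern, to illustrate the shape (the real code presumably uses Redis `SETEX`/`GET`; all names here are hypothetical):

```python
import time

# In-memory stand-in for the 24h Redis cache described above.
TTL_SECONDS = 24 * 3600
_cache = {}  # key -> (expiry_epoch, value)

def get_loss_reasons_cached(fetch_fn, key='mid_section:loss_reasons', ttl=TTL_SECONDS):
    """Return cached loss reasons, refreshing via fetch_fn() when expired."""
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]
    value = fetch_fn()  # e.g. run all_loss_reasons.sql against DWH
    _cache[key] = (now + ttl, value)
    return value
```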
diff --git a/src/mes_dashboard/sql/mid_section_defect/genealogy_records.sql b/src/mes_dashboard/sql/mid_section_defect/genealogy_records.sql
new file mode 100644
index 0000000..bd878f2
--- /dev/null
+++ b/src/mes_dashboard/sql/mid_section_defect/genealogy_records.sql
@@ -0,0 +1,36 @@
+-- Mid-Section Defect Traceability - LOT Genealogy Records (Query 2)
+-- Batch query for split/merge records related to work orders
+--
+-- Parameters:
+-- MFG_ORDER_FILTER - Dynamic IN clause for MFGORDERNAME (built by QueryBuilder)
+--
+-- Tables used:
+-- DWH.DW_MES_CONTAINER (MFGORDERNAME indexed → get CONTAINERIDs)
+-- DWH.DW_MES_HM_LOTMOVEOUT (48M rows, no CONTAINERID index)
+--
+-- Performance:
+-- Full scan on HM_LOTMOVEOUT filtered by CONTAINERIDs from work orders.
+-- CDONAME filter reduces result set to only split/merge operations.
+-- Estimated 30-120s. Use aggressive caching (30-min TTL).
+--
+WITH work_order_lots AS (
+ SELECT CONTAINERID
+ FROM DWH.DW_MES_CONTAINER
+ WHERE {{ MFG_ORDER_FILTER }}
+)
+SELECT
+ h.CDONAME AS OPERATION_TYPE,
+ h.CONTAINERID AS TARGET_CID,
+ h.CONTAINERNAME AS TARGET_LOT,
+ h.FROMCONTAINERID AS SOURCE_CID,
+ h.FROMCONTAINERNAME AS SOURCE_LOT,
+ h.QTY,
+ h.TXNDATE
+FROM DWH.DW_MES_HM_LOTMOVEOUT h
+WHERE (
+ h.CONTAINERID IN (SELECT CONTAINERID FROM work_order_lots)
+ OR h.FROMCONTAINERID IN (SELECT CONTAINERID FROM work_order_lots)
+)
+ AND h.FROMCONTAINERID IS NOT NULL
+ AND (UPPER(h.CDONAME) LIKE '%SPLIT%' OR UPPER(h.CDONAME) LIKE '%COMBINE%')
+ORDER BY h.TXNDATE
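The `{{ MFG_ORDER_FILTER }}` placeholder is filled by the QueryBuilder, and the neighboring files note Oracle's 1000-expression IN-list cap. A sketch of that filter-building step under those assumptions (function name and quoting style are illustrative):

```python
# Sketch of the QueryBuilder step that fills {{ MFG_ORDER_FILTER }}.
# Oracle caps IN lists at 1000 expressions (ORA-01795), so long value
# lists are split into OR-joined IN groups.
def build_in_filter(column, values, chunk_size=1000):
    """Render `col IN (...) OR col IN (...)` with quoted, escaped literals."""
    if not values:
        return '1 = 0'  # empty input matches nothing
    groups = []
    for i in range(0, len(values), chunk_size):
        chunk = values[i:i + chunk_size]
        quoted = ', '.join("'%s'" % v.replace("'", "''") for v in chunk)
        groups.append('%s IN (%s)' % (column, quoted))
    return '(' + ' OR '.join(groups) + ')'
```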
diff --git a/src/mes_dashboard/sql/mid_section_defect/merge_lookup.sql b/src/mes_dashboard/sql/mid_section_defect/merge_lookup.sql
new file mode 100644
index 0000000..a643628
--- /dev/null
+++ b/src/mes_dashboard/sql/mid_section_defect/merge_lookup.sql
@@ -0,0 +1,21 @@
+-- Mid-Section Defect Traceability - Merge Lookup (Query 2b)
+-- Find source lots that were merged into finished lots
+-- via DW_MES_PJ_COMBINEDASSYLOTS
+--
+-- Parameters:
+-- Dynamically built IN clause for FINISHEDNAME values
+--
+-- Tables used:
+-- DWH.DW_MES_PJ_COMBINEDASSYLOTS (1.97M rows, FINISHEDNAME indexed)
+--
+-- Performance:
+-- FINISHEDNAME has index. Batch IN clause (up to 1000 per query).
+-- Each batch <1s.
+--
+SELECT
+ ca.CONTAINERID AS SOURCE_CID,
+ ca.CONTAINERNAME AS SOURCE_NAME,
+ ca.FINISHEDNAME,
+ ca.LOTID AS FINISHED_CID
+FROM DWH.DW_MES_PJ_COMBINEDASSYLOTS ca
+WHERE {{ FINISHED_NAME_FILTER }}
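On the caller side, rows from this query widen each lot's ancestor set with the source lots that were combined into a finished lot. A sketch of that merge-expansion step, assuming a cid-to-CONTAINERNAME lookup is available (function shape and dict layouts are illustrative, not the actual service code):

```python
# Sketch of the COMBINEDASSYLOTS merge-expansion step: rows from
# merge_lookup.sql (SOURCE_CID keyed by FINISHEDNAME) are folded into
# each lot's ancestor set.
def expand_merged_ancestors(ancestors, lot_names, merge_rows):
    """ancestors: cid -> set of ancestor cids; lot_names: cid -> CONTAINERNAME."""
    by_finished = {}
    for row in merge_rows:
        by_finished.setdefault(row['FINISHEDNAME'], set()).add(row['SOURCE_CID'])
    for cid, anc in ancestors.items():
        # Check the lot itself and every known ancestor against FINISHEDNAME.
        for member_cid in list(anc) + [cid]:
            name = lot_names.get(member_cid)
            if name and name in by_finished:
                anc |= by_finished[name]
    return ancestors
```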
diff --git a/src/mes_dashboard/sql/mid_section_defect/split_chain.sql b/src/mes_dashboard/sql/mid_section_defect/split_chain.sql
new file mode 100644
index 0000000..6aecb20
--- /dev/null
+++ b/src/mes_dashboard/sql/mid_section_defect/split_chain.sql
@@ -0,0 +1,23 @@
+-- Mid-Section Defect Traceability - Split Chain (Query 2a)
+-- Resolve split ancestors via DW_MES_CONTAINER.SPLITFROMID
+--
+-- Parameters:
+-- Dynamically built IN clause for CONTAINERIDs
+--
+-- Tables used:
+-- DWH.DW_MES_CONTAINER (5.2M rows, CONTAINERID UNIQUE index)
+--
+-- Performance:
+-- CONTAINERID has UNIQUE index. Batch IN clause (up to 1000 per query).
+-- Each batch <1s.
+--
+-- Note: SPLITFROMID may be NULL for lots that were not split from another.
+-- BFS caller uses SPLITFROMID to walk upward; NULL means chain terminus.
+--
+SELECT
+ c.CONTAINERID,
+ c.SPLITFROMID,
+ c.ORIGINALCONTAINERID,
+ c.CONTAINERNAME
+FROM DWH.DW_MES_CONTAINER c
+WHERE {{ CID_FILTER }}
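The note above says the BFS caller walks SPLITFROMID upward, treating NULL as the chain terminus. A self-contained sketch of that caller, where `fetch_split_rows` stands in for running split_chain.sql over one batch of CONTAINERIDs (the depth cap and dict shapes are assumptions):

```python
# Sketch of the BFS caller: walk SPLITFROMID upward in batched levels,
# stopping at NULL parents or the depth cap.
def resolve_split_ancestors(start_cids, fetch_split_rows, max_depth=20):
    """Return cid -> set of split-chain ancestor cids."""
    ancestors = {cid: set() for cid in start_cids}
    frontier = {cid: {cid} for cid in start_cids}  # level cid -> owning root lots
    for _ in range(max_depth):
        if not frontier:
            break
        rows = fetch_split_rows(list(frontier))  # one batched split_chain.sql call
        next_frontier = {}
        for row in rows:
            parent = row.get('SPLITFROMID')
            if parent is None:
                continue  # chain terminus: lot was not split from another
            for root in frontier.get(row['CONTAINERID'], ()):
                if parent not in ancestors[root]:
                    ancestors[root].add(parent)
                    next_frontier.setdefault(parent, set()).add(root)
        frontier = next_frontier
    return ancestors
```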
diff --git a/src/mes_dashboard/sql/mid_section_defect/tmtt_detection.sql b/src/mes_dashboard/sql/mid_section_defect/tmtt_detection.sql
new file mode 100644
index 0000000..ad47584
--- /dev/null
+++ b/src/mes_dashboard/sql/mid_section_defect/tmtt_detection.sql
@@ -0,0 +1,93 @@
+-- Mid-Section Defect Traceability - TMTT Detection Data (Query 1)
+-- Returns LOT-level data with TMTT input, ALL defects, and lot metadata
+--
+-- Parameters:
+-- :start_date - Start date (YYYY-MM-DD)
+-- :end_date - End date (YYYY-MM-DD)
+--
+-- Tables used:
+-- DWH.DW_MES_LOTWIPHISTORY (TMTT station records)
+-- DWH.DW_MES_LOTREJECTHISTORY (defect records - ALL loss reasons)
+-- DWH.DW_MES_CONTAINER (product info + MFGORDERNAME for genealogy)
+-- DWH.DW_MES_WIP (WORKFLOWNAME)
+--
+-- Changes from tmtt_defect/base_data.sql:
+-- 1. Removed hardcoded LOSSREASONNAME filter → fetches ALL loss reasons
+-- 2. Added MFGORDERNAME from DW_MES_CONTAINER (needed for genealogy batch)
+-- 3. Removed MOLD equipment lookup (upstream tracing done separately)
+-- 4. Kept existing dedup logic (ROW_NUMBER by CONTAINERID, latest TRACKINTIMESTAMP)
+
+WITH tmtt_records AS (
+ SELECT /*+ MATERIALIZE */
+ h.CONTAINERID,
+ h.EQUIPMENTID AS TMTT_EQUIPMENTID,
+ h.EQUIPMENTNAME AS TMTT_EQUIPMENTNAME,
+ h.TRACKINQTY,
+ h.TRACKINTIMESTAMP,
+ h.TRACKOUTTIMESTAMP,
+ h.FINISHEDRUNCARD,
+ h.SPECNAME,
+ h.WORKCENTERNAME,
+ ROW_NUMBER() OVER (
+ PARTITION BY h.CONTAINERID
+ ORDER BY h.TRACKINTIMESTAMP DESC, h.TRACKOUTTIMESTAMP DESC NULLS LAST
+ ) AS rn
+ FROM DWH.DW_MES_LOTWIPHISTORY h
+ WHERE h.TRACKINTIMESTAMP >= TO_DATE(:start_date, 'YYYY-MM-DD')
+ AND h.TRACKINTIMESTAMP < TO_DATE(:end_date, 'YYYY-MM-DD') + 1
+ AND (UPPER(h.WORKCENTERNAME) LIKE '%TMTT%' OR h.WORKCENTERNAME LIKE '%測試%')
+ AND h.EQUIPMENTID IS NOT NULL
+ AND h.TRACKINTIMESTAMP IS NOT NULL
+),
+tmtt_deduped AS (
+ SELECT * FROM tmtt_records WHERE rn = 1
+),
+tmtt_rejects AS (
+ SELECT /*+ MATERIALIZE */
+ r.CONTAINERID,
+ r.LOSSREASONNAME,
+ SUM(NVL(r.REJECTQTY, 0) + NVL(r.STANDBYQTY, 0) + NVL(r.QTYTOPROCESS, 0)
+ + NVL(r.INPROCESSQTY, 0) + NVL(r.PROCESSEDQTY, 0)) AS REJECTQTY
+ FROM DWH.DW_MES_LOTREJECTHISTORY r
+ WHERE r.TXNDATE >= TO_DATE(:start_date, 'YYYY-MM-DD')
+ AND r.TXNDATE < TO_DATE(:end_date, 'YYYY-MM-DD') + 1
+ AND (UPPER(r.WORKCENTERNAME) LIKE '%TMTT%' OR r.WORKCENTERNAME LIKE '%測試%')
+ GROUP BY r.CONTAINERID, r.LOSSREASONNAME
+),
+lot_metadata AS (
+ SELECT /*+ MATERIALIZE */
+ c.CONTAINERID,
+ c.CONTAINERNAME,
+ c.MFGORDERNAME,
+ c.PJ_TYPE,
+ c.PRODUCTLINENAME
+ FROM DWH.DW_MES_CONTAINER c
+ WHERE c.CONTAINERID IN (SELECT CONTAINERID FROM tmtt_deduped)
+),
+workflow_info AS (
+ SELECT /*+ MATERIALIZE */
+ DISTINCT w.CONTAINERID,
+ w.WORKFLOWNAME
+ FROM DWH.DW_MES_WIP w
+ WHERE w.CONTAINERID IN (SELECT CONTAINERID FROM tmtt_deduped)
+ AND w.PRODUCTLINENAME <> '點測'
+)
+SELECT
+ t.CONTAINERID,
+ m.CONTAINERNAME,
+ m.MFGORDERNAME,
+ m.PJ_TYPE,
+ m.PRODUCTLINENAME,
+ NVL(wf.WORKFLOWNAME, t.SPECNAME) AS WORKFLOW,
+ t.FINISHEDRUNCARD,
+ t.TMTT_EQUIPMENTID,
+ t.TMTT_EQUIPMENTNAME,
+ t.TRACKINQTY,
+ t.TRACKINTIMESTAMP,
+ r.LOSSREASONNAME,
+ NVL(r.REJECTQTY, 0) AS REJECTQTY
+FROM tmtt_deduped t
+LEFT JOIN lot_metadata m ON t.CONTAINERID = m.CONTAINERID
+LEFT JOIN workflow_info wf ON t.CONTAINERID = wf.CONTAINERID
+LEFT JOIN tmtt_rejects r ON t.CONTAINERID = r.CONTAINERID
+ORDER BY t.TRACKINTIMESTAMP
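This query emits one row per CONTAINERID × LOSSREASONNAME, which the service folds into a per-lot defect dict (as in the accumulation loop earlier in this patch). A self-contained sketch of that fold, with field names following the SELECT list above:

```python
from collections import defaultdict

# Fold Query 1's output (one row per CONTAINERID x LOSSREASONNAME)
# into a per-lot defect dict, skipping empty reasons and zero quantities.
def accumulate_lot_defects(rows):
    """Return cid -> {loss reason -> summed reject qty}."""
    lot_defects = defaultdict(lambda: defaultdict(int))
    for row in rows:
        reason = row.get('LOSSREASONNAME')
        qty = row.get('REJECTQTY') or 0
        if reason and qty > 0:
            lot_defects[row['CONTAINERID']][reason] += qty
    return lot_defects
```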
diff --git a/src/mes_dashboard/sql/mid_section_defect/upstream_history.sql b/src/mes_dashboard/sql/mid_section_defect/upstream_history.sql
new file mode 100644
index 0000000..f7f6b20
--- /dev/null
+++ b/src/mes_dashboard/sql/mid_section_defect/upstream_history.sql
@@ -0,0 +1,40 @@
+-- Mid-Section Defect Traceability - Upstream Production History (Query 3)
+-- Get production history for ancestor LOTs at all stations
+--
+-- Parameters:
+-- Dynamically built IN clause for ancestor CONTAINERIDs
+--
+-- Tables used:
+-- DWH.DW_MES_LOTWIPHISTORY (53M rows, CONTAINERID indexed → fast)
+--
+-- Performance:
+-- CONTAINERID has index. Batch IN clause (up to 1000 per query).
+-- Estimated 1-5s per batch.
+--
+WITH ranked_history AS (
+ SELECT
+ h.CONTAINERID,
+ h.WORKCENTERNAME,
+ h.EQUIPMENTID,
+ h.EQUIPMENTNAME,
+ h.SPECNAME,
+ h.TRACKINTIMESTAMP,
+ ROW_NUMBER() OVER (
+ PARTITION BY h.CONTAINERID, h.WORKCENTERNAME, h.EQUIPMENTNAME
+ ORDER BY h.TRACKINTIMESTAMP DESC
+ ) AS rn
+ FROM DWH.DW_MES_LOTWIPHISTORY h
+ WHERE {{ ANCESTOR_FILTER }}
+ AND h.EQUIPMENTID IS NOT NULL
+ AND h.TRACKINTIMESTAMP IS NOT NULL
+)
+SELECT
+ CONTAINERID,
+ WORKCENTERNAME,
+ EQUIPMENTID,
+ EQUIPMENTNAME,
+ SPECNAME,
+ TRACKINTIMESTAMP
+FROM ranked_history
+WHERE rn = 1
+ORDER BY CONTAINERID, TRACKINTIMESTAMP
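The deduped rows from this query become the `upstream_by_cid` mapping consumed by the row builder earlier in this patch. A sketch of that grouping step, assuming `workcenter_group` is taken from WORKCENTERNAME (the dict shape is illustrative):

```python
from collections import defaultdict

# Group Query 3's deduped rows by CONTAINERID into the upstream_by_cid
# mapping; key names mirror the SELECT list above.
def group_upstream_history(rows):
    """Return cid -> [{'workcenter_group': ..., 'equipment_name': ...}, ...]."""
    upstream_by_cid = defaultdict(list)
    for row in rows:
        upstream_by_cid[row['CONTAINERID']].append({
            'workcenter_group': row['WORKCENTERNAME'],
            'equipment_name': row['EQUIPMENTNAME'],
        })
    return upstream_by_cid
```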