Critical Fix - Complete Solution:
Previous fix missed image_regions and tables fields, causing incorrect
scale factors when images or tables extended beyond text regions.
User's Scenario (multiple JSON files):
- text_regions: max coordinates ~1850
- image_regions: max coordinates ~2204 (beyond text!)
- tables: max coordinates ~3500 (beyond both!)
- Without checking all fields → scale=1.0 → content out of bounds
Complete Fix:
Now checks ALL possible bbox sources:
1. text_regions - text content
2. image_regions - images/figures/charts (NEW)
3. tables - table structures (NEW)
4. layout - legacy field
5. layout_data.elements - PP-StructureV3 format
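A minimal sketch of this multi-source check (the field names mirror the list above; `infer_ocr_dimensions` is an illustrative name, not the actual code in pdf_generator_service.py, and flat `[x1, y1, x2, y2]` bboxes are assumed here — polygon boxes are covered in a later sketch):

```python
from typing import Optional, Tuple

def infer_ocr_dimensions(ocr_data: dict) -> Optional[Tuple[float, float]]:
    """Infer the OCR coordinate space from the largest bbox found in any field."""
    regions = []
    for field in ("text_regions", "image_regions", "tables", "layout"):
        value = ocr_data.get(field)
        if isinstance(value, list):  # type check: field may be missing or malformed
            regions.extend(value)
    layout_data = ocr_data.get("layout_data")
    if isinstance(layout_data, dict) and isinstance(layout_data.get("elements"), list):
        regions.extend(layout_data["elements"])  # PP-StructureV3 format

    max_x = max_y = 0.0
    for region in regions:
        bbox = region.get("bbox") if isinstance(region, dict) else None
        if bbox and len(bbox) == 4 and not isinstance(bbox[0], (list, tuple)):
            max_x = max(max_x, float(bbox[2]))
            max_y = max(max_y, float(bbox[3]))
    # Caller only falls back to source file dimensions when nothing was found.
    return (max_x, max_y) if max_x and max_y else None
```

With all five sources included, the scenario above yields max coordinates of roughly 2204 x 3500, so the scale factors come out around 595.3/2204 ≈ 0.270 and 841.9/3500 ≈ 0.241, matching the test results below.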
Changes:
- backend/app/services/pdf_generator_service.py:
  - Add image_regions check (critical for images at X=1434, X=2204)
  - Add tables check (critical for tables at Y=3500)
  - Add type checks for all fields for safety
  - Update warning message to list all checked fields
- backend/test_all_regions.py:
  - Test all region types are properly checked
  - Validates max dimensions from ALL sources
  - Confirms correct scale factors (~0.27, ~0.24)
Test Results:
✓ All 5 regions checked (text + image + table)
✓ OCR dimensions: 2204 x 3500 (from ALL regions)
✓ Scale factors: X=0.270, Y=0.241 (correct!)
This is the COMPLETE fix for the dimension inference bug.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Critical Fix for User-Reported Bug:
The function checked layout_data.elements but never the 'layout' field, and
did not prioritize 'text_regions', so it missed all bbox data when
layout=[] (empty list) even though text_regions contained valid data.
User's Scenario (ELER-8-100HFV Data Sheet):
- JSON structure: layout=[] (empty), text_regions=[...] (has data)
- Previous code only checked layout_data.elements
- Resulted in max_x=0, max_y=0
- Fell back to source file dimensions (595x842)
- Calculated scale=1.0 instead of ~0.3
- All text with X>595 rendered out of bounds
Root Cause Analysis:
1. Different OCR outputs use different field names
2. Some use 'layout', some use 'text_regions', some use 'layout_data.elements'
3. Previous code didn't check 'layout' field at all
4. Previous code checked layout_data.elements before text_regions
5. If both were empty/missing, fell back to source dims too early
Solution:
Check ALL possible bbox sources in order of priority:
1. text_regions - Most common, contains all text boxes
2. layout - Legacy field, may be empty list
3. layout_data.elements - PP-StructureV3 format
Only fall back to source file dimensions if ALL sources are empty.
Changes:
- backend/app/services/pdf_generator_service.py:
  - Rewrite calculate_page_dimensions to check all three fields
  - Use explicit extend() to combine all regions
  - Add type checks (isinstance) for safety
  - Update warning messages to be more specific
- backend/test_empty_layout.py:
  - Add test for layout=[] + text_regions=[...] scenario
  - Validates scale factors are correct (~0.3, not 1.0)
Test Results:
✓ OCR dimensions inferred from text_regions: 1850.0 x 2880.0
✓ Target PDF dimensions: 595.3 x 841.9
✓ Scale factors correct: X=0.322, Y=0.292 (NOT 1.0!)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Critical Fix:
The previous implementation incorrectly calculated scale factors because
calculate_page_dimensions() was prioritizing source file dimensions over
OCR coordinate analysis, resulting in scale=1.0 when it should have been ~0.27.
Root Cause:
- PaddleOCR processes PDFs at high resolution (e.g., 2185x3500 pixels)
- OCR bbox coordinates are in this high-res space
- calculate_page_dimensions() was returning source PDF size (595x842) instead
- This caused scale_w=1.0, scale_h=1.0, placing all text out of bounds
Solution:
1. Rewrite calculate_page_dimensions() to:
   - Accept full ocr_data instead of just text_regions
   - Process both text_regions AND layout elements
   - Handle polygon bbox format [[x,y], ...] correctly (see the first sketch after this list)
   - Infer OCR dimensions from max bbox coordinates FIRST
   - Only fall back to source file dimensions if inference fails
2. Separate OCR dimensions from target PDF dimensions:
   - ocr_width/height: Inferred from bbox (e.g., 2185x3280)
   - target_width/height: From source file (e.g., 595x842)
   - scale_w = target_width / ocr_width (e.g., 0.272)
   - scale_h = target_height / ocr_height (e.g., 0.257)
3. Add PyPDF2 support (see the second sketch after this list):
   - Extract dimensions from source PDF files
   - Required for getting target PDF size
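Two sketches follow. First, the polygon handling from item 1: boxes can arrive either as flat `[x1, y1, x2, y2]` lists or as four-point polygons `[[x, y], ...]`, so both shapes have to be reduced to the same bounds (the helper name is illustrative, not the real code):

```python
def bbox_bounds(bbox):
    """Return (max_x, max_y) for a flat [x1, y1, x2, y2] bbox or a polygon [[x, y], ...]."""
    if not bbox:
        return 0.0, 0.0
    if isinstance(bbox[0], (list, tuple)):        # polygon format: [[x, y], [x, y], ...]
        return max(p[0] for p in bbox), max(p[1] for p in bbox)
    return float(bbox[2]), float(bbox[3])         # flat format: [x1, y1, x2, y2]
```

Per item 1, the inferred ocr_width/ocr_height is then the maximum of these bounds across all regions.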
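Second, the PyPDF2 (>=3.0) lookup assumed by item 3; the real get_original_page_size() presumably adds error handling and fallbacks:

```python
from PyPDF2 import PdfReader

def pdf_page_size(pdf_path: str, page_index: int = 0):
    """Return (width, height) of one PDF page in points, e.g. (595.3, 841.9) for A4."""
    box = PdfReader(pdf_path).pages[page_index].mediabox
    return float(box.width), float(box.height)
```

Dividing these target dimensions by the inferred OCR dimensions gives the scale factors: 595.3 / 2185 ≈ 0.272 and 841.9 / 3280 ≈ 0.257.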
Changes:
- backend/app/services/pdf_generator_service.py:
  - Fix calculate_page_dimensions() to infer from bbox first
  - Add PyPDF2 support in get_original_page_size()
  - Simplify scaling logic (removed ocr_dimensions dependency)
  - Update all drawing calls to use target_height instead of page_height
- requirements.txt:
  - Add PyPDF2>=3.0.0 for PDF dimension extraction
- backend/test_bbox_scaling.py:
  - Add comprehensive test for high-res OCR → A4 PDF scenario
  - Validates proper scale factor calculation (0.272 x 0.257)
Test Results:
✓ OCR dimensions correctly inferred: 2185.0 x 3280.0
✓ Target PDF dimensions extracted: 595.3 x 841.9
✓ Scale factors correct: X=0.272, Y=0.257
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Problem:
- OCR processes images at a smaller resolution, but the coordinates were being used directly on larger PDF canvases
- This caused all text/tables/images to be drawn at the wrong scale in the bottom-left corner
Solution:
- Track OCR image dimensions in JSON output (ocr_dimensions)
- Calculate proper scale factors: scale_w = pdf_width/ocr_width, scale_h = pdf_height/ocr_height
- Apply scaling to all coordinates before drawing on PDF canvas
- Support per-page scaling for multi-page PDFs
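A sketch of the scaling plus the coordinate flip applied before drawing (OCR uses a top-left origin, ReportLab a bottom-left one); the function name and signature are illustrative, not the service's actual API:

```python
def to_pdf_coords(bbox, scale_w, scale_h, pdf_height):
    """Scale an OCR bbox [x1, y1, x2, y2] and flip the y-axis for ReportLab.

    OCR coordinates grow downward from the top-left corner; ReportLab
    coordinates grow upward from the bottom-left corner.
    """
    x1, y1, x2, y2 = bbox
    sx1, sx2 = x1 * scale_w, x2 * scale_w
    sy1, sy2 = y1 * scale_h, y2 * scale_h
    # After the flip, the OCR bottom edge (sy2) becomes the PDF lower-left y.
    return sx1, pdf_height - sy2, sx2, pdf_height - sy1
```

For example, an OCR box [100, 100, 300, 150] with scale 0.5 on a 700 pt page maps to (50.0, 625.0, 150.0, 650.0).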
Changes:
1. ocr_service.py:
   - Add OCR image dimensions capture using PIL (see the sketch after this list)
   - Include ocr_dimensions in JSON output for both single images and PDFs
2. pdf_generator_service.py:
   - Calculate scale factors from OCR dimensions vs target PDF dimensions
   - Update all drawing methods (text, table, image) to accept and apply scale factors
   - Apply scaling to bbox coordinates before coordinate transformation
3. test_pdf_scaling.py:
   - Add test script to verify scaling works correctly
   - Test with OCR at 500x700 scaled to PDF at 1000x1400 (2x scaling)
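The dimension capture in item 1 amounts to reading the processed image's pixel size with PIL; a hedged sketch (the exact JSON shape beyond the ocr_dimensions key is an assumption):

```python
from PIL import Image

def capture_ocr_dimensions(image_path: str) -> dict:
    """Record the pixel size of the image that was actually fed to OCR."""
    with Image.open(image_path) as img:
        width, height = img.size
    return {"ocr_dimensions": {"width": width, "height": height}}
```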
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major Features:
- Add PDF generation service with Chinese font support
- Parse HTML tables from PP-StructureV3 and rebuild with ReportLab
- Extract table text for translation purposes
- Auto-filter text regions inside tables to avoid overlaps
Backend Changes:
1. pdf_generator_service.py (NEW)
   - HTMLTableParser: Parse HTML tables to extract structure (see the first sketch after this list)
   - PDFGeneratorService: Generate layout-preserving PDFs
   - Coordinate transformation: OCR (top-left) → PDF (bottom-left)
   - Font size heuristics: 75% of bbox height with width checking
   - Table reconstruction: Parse HTML → ReportLab Table
   - Image embedding: Extract bbox from filenames
2. ocr_service.py
   - Add _extract_table_text() for translation support
   - Add output_dir parameter to save images to the result directory
   - Extract bbox from image filenames (img_in_table_box_x1_y1_x2_y2.jpg) (second sketch after this list)
3. tasks.py
   - Update process_task_ocr to use save_results() with PDF generation
   - Fix download_pdf endpoint to use database-stored PDF paths
   - Support on-demand PDF generation from JSON
4. config.py
   - Add chinese_font_path configuration
   - Add pdf_enable_bbox_debug flag
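Two sketches follow. First, the core of what HTMLTableParser does, reduced to the standard-library html.parser (the real class likely handles attributes and edge cases this version ignores):

```python
from html.parser import HTMLParser

class SimpleTableParser(HTMLParser):
    """Collect an HTML <table> into a list of rows (each row a list of cell strings)."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell, self._in_cell = [], [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell, self._cell = True, []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._row.append("".join(self._cell).strip())
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)

parser = SimpleTableParser()
parser.feed("<table><tr><td>A</td><td>B</td></tr><tr><td>1</td><td>2</td></tr></table>")
# parser.rows == [["A", "B"], ["1", "2"]]  -> can be fed into reportlab.platypus.Table(parser.rows)
```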
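Second, a hedged sketch of the filename-to-bbox extraction used for image embedding (the pattern is taken from the img_in_table_box_x1_y1_x2_y2.jpg example above; the accepted prefixes and extensions are assumptions):

```python
import re

BBOX_RE = re.compile(r"box_(\d+)_(\d+)_(\d+)_(\d+)\.(?:jpg|jpeg|png)$", re.IGNORECASE)

def bbox_from_filename(filename: str):
    """Extract [x1, y1, x2, y2] from names like img_in_table_box_120_40_860_500.jpg."""
    match = BBOX_RE.search(filename)
    return [int(v) for v in match.groups()] if match else None
```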
Frontend Changes:
1. PDFViewer.tsx (NEW)
   - React PDF viewer with zoom and pagination
   - Memoized file config to prevent unnecessary reloads
2. TaskDetailPage.tsx & ResultsPage.tsx
   - Integrate PDF preview and download
3. main.tsx
   - Configure PDF.js worker via CDN
4. vite.config.ts
   - Add host: '0.0.0.0' for network access
   - Use VITE_API_URL environment variable for backend proxy
Dependencies:
- reportlab: PDF generation library
- Noto Sans SC font: Chinese character support
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Backend fixes:
- Fix markdown generation using correct 'markdown_content' key in tasks.py
- Update admin service to return flat data structure matching frontend types
- Add task_count and failed_tasks fields to user statistics
- Fix top users endpoint to return complete user data
Frontend fixes:
- Migrate ResultsPage from V1 batch API to V2 task API with polling
- Create TaskDetailPage component with markdown preview and download buttons
- Refactor ExportPage to support multi-task selection using V2 download endpoints
- Fix login infinite refresh loop with concurrency control flags
- Create missing Checkbox UI component
New features:
- Add /tasks/:taskId route for task detail view
- Implement multi-task batch export functionality
- Add real-time task status polling (2s interval)
OpenSpec:
- Archive completed proposal 2025-11-17-fix-v2-api-ui-issues
- Create result-export and task-management specifications
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fixed WSL CUDA library path in ~/.bashrc
- Upgraded PaddlePaddle from 3.0.0 to 3.2.1
- Verified fused_rms_norm_ext API is now available
- Enabled chart recognition in ocr_service.py
- Updated CHART_RECOGNITION.md to reflect enabled status
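A quick sanity check of the upgraded install (paddle.utils.run_check() and the CUDA probe are standard PaddlePaddle calls; the expected version comes from the notes above):

```python
import paddle

print(paddle.__version__)                     # expected: 3.2.1 after the upgrade
print(paddle.device.is_compiled_with_cuda())  # True when the CUDA build is active
paddle.utils.run_check()                      # runs a small computation on GPU/CPU
```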
Chart recognition now supports:
✅ Chart type identification
✅ Data extraction from charts
✅ Axis and legend parsing
✅ Converting charts to structured data
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Remove all V1 architecture components and promote V2 to primary:
- Delete all paddle_ocr_* table models (export, ocr, translation, user)
- Delete legacy routers (auth, export, ocr, translation)
- Delete legacy schemas and services
- Promote user_v2.py to user.py as primary user model
- Update all imports and dependencies to use V2 models only
- Update main.py version to 2.0.0
Database changes:
- Fix SQLAlchemy reserved word: rename audit_log.metadata to extra_data
- Add migration to drop all paddle_ocr_* tables
- Update alembic env to only import V2 models
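For context on the rename: `metadata` is reserved on SQLAlchemy declarative models (it is the class-level MetaData attribute), so the column has to be exposed under a different attribute name. A minimal sketch with illustrative column types:

```python
from sqlalchemy import Column, Integer, JSON
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class AuditLog(Base):
    __tablename__ = "audit_log"

    id = Column(Integer, primary_key=True)
    # "metadata" would collide with Base.metadata, so the column is exposed as extra_data.
    extra_data = Column(JSON, nullable=True)
```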
Frontend fixes:
- Fix Select component exports in TaskHistoryPage.tsx
- Update to use simplified Select API with options prop
- Fix AxiosInstance TypeScript import syntax
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
PaddleOCR-VL chart recognition model requires `fused_rms_norm_ext` API
which is not available in PaddlePaddle 3.0.0 stable release.
Changes:
- Set use_chart_recognition=False in PP-StructureV3 initialization
- Remove unsupported show_log parameter from PaddleOCR 3.x API calls
- Document known limitation in openspec proposal
- Add limitation documentation to README
- Update tasks.md with documentation task for known issues
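A sketch of the adjusted initialization, assuming the PPStructureV3 entry point from paddleocr 3.x (all other constructor arguments omitted):

```python
from paddleocr import PPStructureV3

# Chart recognition needs the fused_rms_norm_ext API, which PaddlePaddle 3.0.0 stable
# lacks, so it is disabled; show_log no longer exists in the 3.x API and must not be passed.
structure_engine = PPStructureV3(use_chart_recognition=False)
```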
Impact:
- Layout analysis still detects/extracts charts as images ✓
- Tables, formulas, and text recognition work normally ✓
- Deep chart understanding (type detection, data extraction) disabled ✗
- Chart to structured data conversion disabled ✗
Workaround: Charts saved as image files for manual review
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
PaddlePaddle 3.0.0b2 raises an "Illegal instruction" error on the current CPU.
Downgrade to the stable 2.6.2 release, which works but uses a different API.
Changes:
- Auto-detect PaddlePaddle version at runtime
- Use 'device' parameter for 3.x (device="gpu:0" or "cpu")
- Use 'use_gpu' + 'gpu_mem' parameters for 2.x
- Apply to both get_ocr_engine() and get_structure_engine()
- Log PaddlePaddle version in initialization messages
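A minimal sketch mirroring the branching described above (the helper name and the gpu_mem default are illustrative; the real logic lives in get_ocr_engine()/get_structure_engine()):

```python
import paddle
from paddleocr import PaddleOCR

def build_ocr_engine(use_gpu: bool, gpu_device_id: int = 0, gpu_mem: int = 500):
    """Choose constructor arguments based on the installed PaddlePaddle major version."""
    major = int(paddle.__version__.split(".")[0])
    if major >= 3:
        # PaddlePaddle 3.x path: single device string
        kwargs = {"device": f"gpu:{gpu_device_id}" if use_gpu else "cpu"}
    else:
        # PaddlePaddle 2.x path: legacy use_gpu / gpu_mem parameters
        kwargs = {"use_gpu": use_gpu, "gpu_mem": gpu_mem}
    return PaddleOCR(**kwargs)
```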
Current setup:
- paddlepaddle-gpu==2.6.2 (stable, CUDA compiled)
- paddleocr==3.3.1
- paddlex==3.3.9
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
PaddleOCR 3.x changed the API:
- Removed: use_gpu=True/False and gpu_mem=<value>
- Added: device="gpu:0" or device="cpu"
Changes:
- Updated get_ocr_engine() to use device parameter
- Updated get_structure_engine() to use device parameter
- GPU mode: device="gpu:{gpu_device_id}"
- CPU mode: device="cpu"
This fixes the "ValueError: Unknown argument: gpu_mem" runtime error.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>