feat: Extract hardcoded configs to environment variables

- Add environment variable configuration for backend and frontend
- Backend: DB_POOL_SIZE, JWT_EXPIRE_HOURS, timeout configs, directory paths
- Frontend: VITE_API_BASE_URL, VITE_UPLOAD_TIMEOUT, Whisper configs
- Create deployment script (scripts/deploy-backend.sh)
- Create 1Panel deployment guide (docs/1panel-deployment.md)
- Update DEPLOYMENT.md with env var documentation
- Create README.md with project overview
- Remove obsolete PRD.md, SDD.md, TDD.md (replaced by OpenSpec)
- Keep CORS allow_origins=["*"] for Electron EXE distribution

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: egg
Date: 2025-12-14 14:31:55 +08:00
Commit: 01aee1fd0d (parent 43c413c5ce)
19 changed files with 1460 additions and 311 deletions
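
The VITE_-prefixed values in the new client/.env.example below are Vite build-time variables, so the renderer reads them through import.meta.env. A minimal sketch of how an upload call might consume VITE_API_BASE_URL and VITE_UPLOAD_TIMEOUT (the uploadRecording helper and the /uploads endpoint are illustrative assumptions, not part of this commit):

```typescript
// Renderer-side sketch. VITE_* values are baked in at build time by Vite;
// the helper name and endpoint are hypothetical, not taken from this commit.
const API_BASE_URL: string =
  import.meta.env.VITE_API_BASE_URL ?? "http://localhost:8000/api";
const UPLOAD_TIMEOUT_MS: number =
  Number(import.meta.env.VITE_UPLOAD_TIMEOUT ?? 600_000);

// Upload a recording, aborting if the request exceeds the configured timeout.
export async function uploadRecording(file: File): Promise<Response> {
  const body = new FormData();
  body.append("file", file);
  return fetch(`${API_BASE_URL}/uploads`, {
    method: "POST",
    body,
    signal: AbortSignal.timeout(UPLOAD_TIMEOUT_MS),
  });
}
```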

client/.env.example (new file, 39 lines)

@@ -0,0 +1,39 @@
# =============================================================================
# Meeting Assistant Frontend Configuration
# Copy this file to .env and fill in your values
# =============================================================================
# -----------------------------------------------------------------------------
# API Configuration (Vite build-time variables)
# -----------------------------------------------------------------------------
# Backend API base URL
# For development: http://localhost:8000/api
# For production: http://<server-ip>:<port>/api or https://api.example.com/api
VITE_API_BASE_URL=http://localhost:8000/api
# Upload timeout in milliseconds (default: 600000 = 10 minutes)
VITE_UPLOAD_TIMEOUT=600000
# Application title (shown in window title)
VITE_APP_TITLE=Meeting Assistant
# -----------------------------------------------------------------------------
# Sidecar/Whisper Configuration (Electron runtime variables)
# -----------------------------------------------------------------------------
# These environment variables are read by the Electron main process at runtime
# and passed to the Sidecar (Whisper transcription service)
# Whisper model size
# Options: tiny, base, small, medium, large
# Larger models are more accurate but require more memory and are slower
WHISPER_MODEL=medium
# Execution device
# Options: cpu, cuda
# Use "cuda" if you have an NVIDIA GPU with CUDA support
WHISPER_DEVICE=cpu
# Compute precision
# Options: int8, float16, float32
# int8 is fastest but less accurate; float32 is most accurate but slowest
WHISPER_COMPUTE=int8
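
The WHISPER_* values above are runtime variables rather than Vite build-time ones: the Electron main process reads them from process.env and forwards them to the Sidecar when it launches the transcription service. A minimal sketch of that hand-off, assuming the Sidecar is spawned as a child process (the startWhisperSidecar name and the sidecar binary path are illustrative, not from this commit):

```typescript
// Electron main-process sketch. The sidecar location and launch arguments
// are assumptions for illustration; only the env var names and defaults
// come from client/.env.example.
import { spawn, type ChildProcess } from "node:child_process";
import path from "node:path";

export function startWhisperSidecar(): ChildProcess {
  // Read runtime configuration, falling back to the defaults documented
  // in client/.env.example.
  const env = {
    ...process.env,
    WHISPER_MODEL: process.env.WHISPER_MODEL ?? "medium",
    WHISPER_DEVICE: process.env.WHISPER_DEVICE ?? "cpu",
    WHISPER_COMPUTE: process.env.WHISPER_COMPUTE ?? "int8",
  };

  // process.resourcesPath is where Electron places bundled resources in a
  // packaged build; the "sidecar/whisper-server" layout is hypothetical.
  const sidecarPath = path.join(process.resourcesPath, "sidecar", "whisper-server");

  // The configuration reaches the Sidecar through its environment.
  return spawn(sidecarPath, [], { env, stdio: "inherit" });
}
```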