feat: Add multi-LLM provider support with DeepSeek integration

Major Features:
- ✨ Multi-LLM provider support (DeepSeek, Ollama, OpenAI, Custom)
- 🤖 Admin panel LLM configuration management UI
- 🔄 Dynamic provider switching without restart
- 🧪 Built-in API connection testing
- 🔒 Secure API key management

Backend Changes:
- Add routes/llmConfig.js: Complete LLM config CRUD API
- Update routes/analyze.js: Use database LLM configuration
- Update server.js: Add LLM config routes
- Add scripts/add-deepseek-config.js: DeepSeek setup script

Frontend Changes:
- Update src/pages/AdminPage.jsx: Add LLM Config tab + modal
- Update src/services/api.js: Add LLM config API methods
- Provider presets for DeepSeek, Ollama, OpenAI
- Test connection feature in config modal

Configuration:
- Update .env.example: Add DeepSeek API configuration
- Update package.json: Add llm:add-deepseek script

Documentation:
- Add docs/LLM_CONFIGURATION_GUIDE.md: Complete guide
- Add DEEPSEEK_INTEGRATION.md: Integration summary
- Quick setup instructions for DeepSeek

API Endpoints:
- GET /api/llm-config: List all configurations
- GET /api/llm-config/active: Get active configuration
- POST /api/llm-config: Create configuration
- PUT /api/llm-config/:id: Update configuration
- PUT /api/llm-config/:id/activate: Activate configuration
- DELETE /api/llm-config/:id: Delete configuration
- POST /api/llm-config/test: Test API connection

Database:
- Uses existing llm_configs table
- Only one config active at a time
- Fallback to Ollama if no database config

Security:
- Admin-only access to LLM configuration
- API keys never returned in GET requests
- Audit logging for all config changes
- Cannot delete active configuration

DeepSeek Model:
- Model: deepseek-chat
- High-quality 5 Why analysis
- Excellent Chinese language support
- Cost-effective pricing

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>

docs/LLM_CONFIGURATION_GUIDE.md (new file, 398 lines)

# LLM Configuration Guide

## 📖 Overview

The 5 Why Root Cause Analyzer now supports multiple LLM providers! You can configure and switch between different AI models through the admin panel.

**Supported Providers:**

- ✅ **DeepSeek** (Recommended) - High quality, cost-effective
- ✅ **Ollama** - Self-hosted, privacy-focused
- ✅ **OpenAI** - Industry standard
- ✅ **Custom** - Any OpenAI-compatible API

---

## 🚀 Quick Start: DeepSeek Setup

### Step 1: Get a DeepSeek API Key

1. Go to [https://platform.deepseek.com/](https://platform.deepseek.com/)
2. Sign up for an account
3. Navigate to the **API Keys** section
4. Create a new API key
5. Copy the key (you won't be able to see it again!)

### Step 2: Add Configuration via Admin Panel

1. Log in as admin or super_admin
2. Go to **管理者儀表板** (Admin Dashboard)
3. Click the **🤖 LLM 配置** (LLM Configuration) tab
4. Click the **➕ 新增配置** (Add Configuration) button
5. Fill in the form:
   - **提供商** (Provider): Select "DeepSeek"
   - **API 端點** (API Endpoint): `https://api.deepseek.com`
   - **API Key**: Paste your DeepSeek API key
   - **模型名稱** (Model Name): `deepseek-chat`
   - **Temperature**: 0.7 (default)
   - **Max Tokens**: 6000 (default)
   - **Timeout**: 120 seconds (default)
6. Click **🔍 測試連線** (Test Connection) to verify the connection
7. Click **儲存** (Save) to save
8. Click **啟用** (Activate) to activate this configuration

### Step 3: Start Using DeepSeek

That's it! All 5 Why analyses will now use the DeepSeek API.

---

## 🔧 Configuration via Script

You can also add the DeepSeek configuration from the command line:

### 1. Add the API key to your .env file:

```env
DEEPSEEK_API_KEY=your-api-key-here
DEEPSEEK_API_URL=https://api.deepseek.com
DEEPSEEK_MODEL=deepseek-chat
```

### 2. Run the setup script:

```bash
npm run llm:add-deepseek
```

This will:
- Add the DeepSeek configuration to the database
- Set it as the active LLM provider
- Deactivate all other providers
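
Under the hood, a script like `scripts/add-deepseek-config.js` mostly just maps these environment variables onto a row in the `llm_configs` table. A minimal sketch of that mapping (the function name and defaults here are illustrative, not the actual script):

```javascript
// Sketch: build an llm_configs row from environment variables.
// buildDeepSeekConfig and its defaults are illustrative; the real
// scripts/add-deepseek-config.js may differ.
function buildDeepSeekConfig(env) {
  if (!env.DEEPSEEK_API_KEY) {
    throw new Error('DEEPSEEK_API_KEY is not set in .env');
  }
  return {
    provider_name: 'DeepSeek',
    api_endpoint: env.DEEPSEEK_API_URL || 'https://api.deepseek.com',
    api_key: env.DEEPSEEK_API_KEY,
    model_name: env.DEEPSEEK_MODEL || 'deepseek-chat',
    temperature: 0.7,     // defaults match the admin-panel form
    max_tokens: 6000,
    timeout_seconds: 120,
    is_active: true,      // activating deactivates all other rows
  };
}

console.log(buildDeepSeekConfig({ DEEPSEEK_API_KEY: 'sk-example' }).model_name);
```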

---

## 🎯 Using Different LLM Providers

### DeepSeek (Recommended)

**Pros:**
- High quality responses
- Cost-effective pricing
- Fast response times
- Excellent for Chinese language

**Configuration:**
```
Provider: DeepSeek
API Endpoint: https://api.deepseek.com
Model: deepseek-chat
API Key: Required
```

**Get API Key:** [https://platform.deepseek.com/](https://platform.deepseek.com/)

---

### Ollama (Self-Hosted)

**Pros:**
- Completely free
- Privacy-focused (runs locally or on your server)
- No API key required
- No rate limits

**Configuration:**
```
Provider: Ollama
API Endpoint: https://ollama_pjapi.theaken.com
Model: qwen2.5:3b
API Key: Not required
```

**Setup:** [https://ollama.ai/](https://ollama.ai/)

---

### OpenAI

**Pros:**
- Industry standard
- Most powerful models (GPT-4)
- Excellent documentation
- Multi-language support

**Configuration:**
```
Provider: OpenAI
API Endpoint: https://api.openai.com
Model: gpt-4 or gpt-3.5-turbo
API Key: Required
```

**Get API Key:** [https://platform.openai.com/](https://platform.openai.com/)

---

## ⚙️ Advanced Configuration

### Temperature

Controls randomness in responses:
- **0.0-0.3**: More focused and deterministic (good for technical analysis)
- **0.4-0.7**: Balanced (recommended for 5 Why analysis)
- **0.8-1.0**: More creative and varied
- **1.0+**: Very creative (not recommended)

**Recommended: 0.7**

### Max Tokens

Maximum length of the response:
- **2000**: Short responses
- **4000-6000**: Standard (recommended for 5 Why)
- **8000+**: Very detailed responses

**Recommended: 6000**

### Timeout

How long to wait for an API response:
- **60 seconds**: Fast but may time out on complex analyses
- **120 seconds**: Standard (recommended)
- **180+ seconds**: For very complex analyses

**Recommended: 120 seconds**
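
All three settings map directly onto an OpenAI-style chat completion request. A sketch of how a backend might apply them (function and field names are illustrative; the project's actual routes/analyze.js may differ):

```javascript
// Sketch: map a stored LLM config onto an OpenAI-style chat request.
// buildChatRequest is illustrative, not the project's actual code.
function buildChatRequest(config, messages) {
  return {
    url: `${config.api_endpoint}/v1/chat/completions`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // Ollama-style endpoints may need no key at all.
        ...(config.api_key ? { Authorization: `Bearer ${config.api_key}` } : {}),
      },
      body: JSON.stringify({
        model: config.model_name,
        messages,
        temperature: config.temperature, // 0.7 recommended
        max_tokens: config.max_tokens,   // 6000 recommended
      }),
      // Enforce the timeout by aborting the fetch after N seconds.
      signal: AbortSignal.timeout(config.timeout_seconds * 1000),
    },
  };
}
```

With `fetch(req.url, req.options)`, a slow provider then fails with a timeout error instead of hanging the analysis indefinitely.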

---

## 🔄 Switching Between Providers

You can keep multiple LLM configurations and switch between them:

1. Go to **Admin Dashboard** > **LLM 配置** (LLM Configuration)
2. View all configured providers
3. Click **啟用** (Activate) on any provider to activate it
4. Only one provider can be active at a time

**Note:** You cannot delete the currently active provider.
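
The backend also falls back to Ollama when the database holds no active configuration. Selection logic along these lines would implement that rule (a sketch; the names and fallback defaults are illustrative, not the project's actual code):

```javascript
// Sketch: choose the active LLM config, falling back to Ollama when
// no row in llm_configs is active. Defaults are illustrative.
const OLLAMA_FALLBACK = {
  provider_name: 'Ollama',
  api_endpoint: process.env.OLLAMA_API_URL || 'http://localhost:11434',
  model_name: 'qwen2.5:3b',
  api_key: null, // Ollama needs no API key
};

function pickActiveConfig(rows) {
  // Only one row is active at a time, so find() is sufficient.
  const active = rows.find((row) => row.is_active);
  return active || OLLAMA_FALLBACK;
}
```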

---

## 🧪 Testing Configurations

Before saving a configuration, you can test the connection:

1. Fill in all required fields in the modal
2. Click the **🔍 測試連線** (Test Connection) button
3. Wait for the test to complete
4. If it succeeds, you'll see "✅ 連線測試成功!" (Connection test succeeded!)
5. If it fails, check your API endpoint and key

---

## 📊 API Comparison

| Feature | DeepSeek | Ollama | OpenAI |
|---------|----------|--------|--------|
| **Cost** | $0.14/M tokens | Free | $3-60/M tokens |
| **Speed** | Fast | Medium | Fast |
| **Quality** | High | Good | Excellent |
| **Privacy** | Cloud | Private | Cloud |
| **Chinese** | Excellent | Good | Good |
| **API Key** | Required | No | Required |
| **Best For** | Production | Development | Enterprise |

---

## 🛠️ Troubleshooting

### "連線測試失敗" (Connection test failed)

**Possible causes:**
1. Invalid API key
2. Incorrect API endpoint
3. Network/firewall blocking the request
4. API service is down
5. Rate limit exceeded

**Solutions:**
- Verify the API key is correct
- Check the API endpoint URL (no trailing slash)
- Test network connectivity: `curl https://api.deepseek.com`
- Check the provider's status page
- Wait a few minutes if rate limited

### "Invalid response from API"

**Possible causes:**
1. Model name is incorrect
2. API format has changed
3. Response timeout
4. API returned an error

**Solutions:**
- Verify the model name (e.g., `deepseek-chat`, not `deepseek`)
- Check the provider's documentation
- Increase the timeout
- Check the API logs for errors
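
This error usually means the reply did not have the expected OpenAI-style shape. A defensive parse like the following (illustrative, not the project's actual handler) surfaces the real problem instead of failing later with a confusing message:

```javascript
// Sketch: defensively extract the assistant message from an
// OpenAI-style chat completion response. Illustrative only.
function extractContent(response) {
  // Providers often return { error: { message } } instead of choices.
  if (response && response.error) {
    throw new Error(`API error: ${response.error.message || 'unknown'}`);
  }
  const content = response?.choices?.[0]?.message?.content;
  if (typeof content !== 'string' || content.length === 0) {
    throw new Error('Invalid response from API: missing choices[0].message.content');
  }
  return content;
}
```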

### "Cannot delete active configuration"

**This is expected behavior.**

**Solution:**
- Activate a different configuration first
- Then delete the old one

---

## 🔒 Security Best Practices

### API Key Management

1. **Never commit API keys to git**
   - Use a .env file (already in .gitignore)
   - Or use environment variables
   - Or add keys via the admin panel only

2. **Rotate keys regularly**
   - Change API keys every 90 days
   - Rotate immediately if a key is compromised

3. **Use separate keys for dev/prod**
   - Development: use test/sandbox keys
   - Production: use production keys with limits

4. **Monitor usage**
   - Set up billing alerts
   - Track API usage
   - Set rate limits

### Database Security

API keys are stored in the database, so:
- Ensure the database has a strong password
- Use SSL/TLS for database connections
- Take regular backups
- Restrict database access

**Recommendation:** For production, encrypt API keys at rest using application-level encryption.

---

## 📝 API Endpoints

### Get All LLM Configs
```
GET /api/llm-config
```
Returns a list of all LLM configurations (without API keys).

### Get Active Config
```
GET /api/llm-config/active
```
Returns the currently active LLM configuration.

### Create Config
```
POST /api/llm-config
Body: {
  provider_name: string,
  api_endpoint: string,
  api_key: string (optional),
  model_name: string,
  temperature: number,
  max_tokens: number,
  timeout_seconds: number
}
```

### Update Config
```
PUT /api/llm-config/:id
Body: { ...same as create }
```

### Activate Config
```
PUT /api/llm-config/:id/activate
```
Deactivates all configs and activates the specified one.

### Delete Config
```
DELETE /api/llm-config/:id
```
Cannot delete the active configuration.

### Test Config
```
POST /api/llm-config/test
Body: {
  api_endpoint: string,
  api_key: string (optional),
  model_name: string
}
```
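
As a usage example, a script could drive these endpoints like so. This is a sketch: it assumes admin authentication is already handled, and the base URL and the `id` field on the create response are assumptions, not documented guarantees:

```javascript
// Sketch: validate, test, create, and activate an LLM config.
// BASE and the response's `id` field are assumptions.
const BASE = 'http://localhost:3000';

// Pure helper: check the fields the create endpoint requires.
function validateLlmConfig(payload) {
  const errors = [];
  for (const field of ['provider_name', 'api_endpoint', 'model_name']) {
    if (!payload[field]) errors.push(`${field} is required`);
  }
  return errors;
}

async function createAndActivate(config) {
  const errors = validateLlmConfig(config);
  if (errors.length) throw new Error(errors.join('; '));

  // 1. Verify the credentials work before saving anything.
  const test = await fetch(`${BASE}/api/llm-config/test`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      api_endpoint: config.api_endpoint,
      api_key: config.api_key,
      model_name: config.model_name,
    }),
  });
  if (!test.ok) throw new Error('Connection test failed');

  // 2. Create the configuration...
  const created = await (await fetch(`${BASE}/api/llm-config`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(config),
  })).json();

  // 3. ...then activate it (deactivates all others).
  await fetch(`${BASE}/api/llm-config/${created.id}/activate`, { method: 'PUT' });
  return created;
}
```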

---

## 🎓 Example: Adding a Custom Provider

Let's add Azure OpenAI as a custom provider:

1. Go to Admin Panel > LLM 配置 (LLM Configuration)
2. Click **➕ 新增配置** (Add Configuration)
3. Fill in:
   ```
   Provider: Other
   API Endpoint: https://your-resource.openai.azure.com
   API Key: your-azure-api-key
   Model: gpt-35-turbo
   Temperature: 0.7
   Max Tokens: 6000
   Timeout: 120
   ```
4. Test the connection
5. Save and activate

**Note:** The API must be OpenAI-compatible (i.e., expose a `/v1/chat/completions` endpoint).
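
One caveat with Azure OpenAI specifically: it authenticates with an `api-key` header rather than the `Authorization: Bearer` header used by OpenAI and DeepSeek, and its chat completions URL embeds a deployment name and `api-version` query string, so a generic OpenAI-compatible backend may need a small shim. The header difference looks like this (illustrative helper, not project code):

```javascript
// Sketch: auth headers differ between standard OpenAI-compatible
// APIs and Azure OpenAI. Illustrative helper, not project code.
function authHeaders(provider, apiKey) {
  if (provider === 'azure') {
    // Azure OpenAI expects the key in an 'api-key' header.
    return { 'api-key': apiKey };
  }
  // OpenAI, DeepSeek, and most compatible APIs use a Bearer token.
  return { Authorization: `Bearer ${apiKey}` };
}
```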

---

## 🆘 Getting Help

### Official Documentation:
- **DeepSeek**: [https://platform.deepseek.com/docs](https://platform.deepseek.com/docs)
- **Ollama**: [https://ollama.ai/docs](https://ollama.ai/docs)
- **OpenAI**: [https://platform.openai.com/docs](https://platform.openai.com/docs)

### Project Documentation:
- [README.md](../README.md) - Project overview
- [API_DOC.md](./API_DOC.md) - API documentation
- [QUICKSTART.md](../QUICKSTART.md) - Getting started guide

### Repository:
https://gitea.theaken.com/donald/5why-analyzer

---

## 🎉 Success Checklist

Your LLM configuration is working correctly when:

- ✅ The test connection succeeds
- ✅ The configuration is marked as "啟用中" (Active)
- ✅ 5 Why analysis produces results without errors
- ✅ Analysis completes in reasonable time (<2 minutes)
- ✅ Results are high quality and in the correct language
- ✅ No rate limit or quota errors

---

**Version**: 1.0.0
**Last Updated**: 2025-12-06
**Feature**: Multi-LLM Support

**Made with Claude Code** 🤖