# LLM Configuration Guide

## 📖 Overview

The 5 Why Root Cause Analyzer now supports multiple LLM providers! You can configure and switch between different AI models through the admin panel.

**Supported Providers:**

- ✅ **DeepSeek** (Recommended) - High quality, cost-effective
- ✅ **Ollama** - Self-hosted, privacy-focused
- ✅ **OpenAI** - Industry standard
- ✅ **Custom** - Any OpenAI-compatible API

---

## 🚀 Quick Start: DeepSeek Setup

### Step 1: Get a DeepSeek API Key

1. Go to [https://platform.deepseek.com/](https://platform.deepseek.com/)
2. Sign up for an account
3. Navigate to the **API Keys** section
4. Create a new API key
5. Copy the key (you won't be able to see it again!)

### Step 2: Add Configuration via Admin Panel

1. Log in as admin or super_admin
2. Go to **管理者儀表板** (Admin Dashboard)
3. Click the **🤖 LLM 配置** (LLM Configuration) tab
4. Click the **➕ 新增配置** (Add Configuration) button
5. Fill in the form:
   - **提供商** (Provider): Select "DeepSeek"
   - **API 端點** (API Endpoint): `https://api.deepseek.com`
   - **API Key**: Paste your DeepSeek API key
   - **模型名稱** (Model Name): `deepseek-chat`
   - **Temperature**: 0.7 (default)
   - **Max Tokens**: 6000 (default)
   - **Timeout**: 120 seconds (default)
6. Click **🔍 測試連線** (Test Connection) to verify the connection
7. Click **儲存** (Save)
8. Click **啟用** (Activate) to activate this configuration

### Step 3: Start Using DeepSeek

That's it! All 5 Why analyses will now use the DeepSeek API.

---

## 🔧 Configuration via Script

You can also add the DeepSeek configuration from the command line:

### 1. Add the API key to your .env file:

```env
DEEPSEEK_API_KEY=your-api-key-here
DEEPSEEK_API_URL=https://api.deepseek.com
DEEPSEEK_MODEL=deepseek-chat
```

### 2. Run the setup script:

```bash
npm run llm:add-deepseek
```

This will:

- Add the DeepSeek configuration to the database
- Set it as the active LLM provider
- Deactivate all other providers

---

## 🎯 Using Different LLM Providers

### DeepSeek (Recommended)

**Pros:**

- High-quality responses
- Cost-effective pricing
- Fast response times
- Excellent for Chinese language

**Configuration:**

```
Provider: DeepSeek
API Endpoint: https://api.deepseek.com
Model: deepseek-chat
API Key: Required
```

**Get API Key:** [https://platform.deepseek.com/](https://platform.deepseek.com/)

---

### Ollama (Self-Hosted)

**Pros:**

- Completely free
- Privacy-focused (runs locally or on your server)
- No API key required
- No rate limits

**Configuration:**

```
Provider: Ollama
API Endpoint: https://ollama_pjapi.theaken.com
Model: qwen2.5:3b
API Key: Not required
```

**Setup:** [https://ollama.ai/](https://ollama.ai/)

---

### OpenAI

**Pros:**

- Industry standard
- Most powerful models (GPT-4)
- Excellent documentation
- Multi-language support

**Configuration:**

```
Provider: OpenAI
API Endpoint: https://api.openai.com
Model: gpt-4 or gpt-3.5-turbo
API Key: Required
```

**Get API Key:** [https://platform.openai.com/](https://platform.openai.com/)

---

## ⚙️ Advanced Configuration

### Temperature

Controls randomness in responses:

- **0.0-0.3**: More focused and deterministic (good for technical analysis)
- **0.4-0.7**: Balanced (recommended for 5 Why analysis)
- **0.8-1.0**: More creative and varied
- **1.0+**: Very creative (not recommended)

**Recommended: 0.7**

### Max Tokens

Maximum length of the response:

- **2000**: Short responses
- **4000-6000**: Standard (recommended for 5 Why)
- **8000+**: Very detailed responses

**Recommended: 6000**

### Timeout

How long to wait for an API response:

- **60 seconds**: Fast, but may time out on complex analyses
- **120 seconds**: Standard (recommended)
- **180+ seconds**: For very complex analyses

**Recommended: 120 seconds**

---

## 🔄 Switching Between Providers
You can have multiple LLM configurations and switch between them:

1. Go to **Admin Dashboard** > **LLM 配置** (LLM Configuration)
2. View all configured providers
3. Click **啟用** (Activate) on any provider to activate it
4. Only one provider can be active at a time

**Note:** You cannot delete the currently active provider.

---

## 🧪 Testing Configurations

Before saving a configuration, you can test the connection:

1. Fill in all required fields in the modal
2. Click the **🔍 測試連線** (Test Connection) button
3. Wait for the test to complete
4. If it succeeds, you'll see "✅ 連線測試成功!" (Connection test succeeded!)
5. If it fails, check your API endpoint and key

---

## 📊 API Comparison

| Feature | DeepSeek | Ollama | OpenAI |
|---------|----------|--------|--------|
| **Cost** | $0.14/M tokens | Free | $3-60/M tokens |
| **Speed** | Fast | Medium | Fast |
| **Quality** | High | Good | Excellent |
| **Privacy** | Cloud | Private | Cloud |
| **Chinese** | Excellent | Good | Good |
| **API Key** | Required | No | Required |
| **Best For** | Production | Development | Enterprise |

---

## 🛠️ Troubleshooting

### "連線測試失敗" (Connection test failed)

**Possible causes:**

1. Invalid API key
2. Incorrect API endpoint
3. Network/firewall blocking the request
4. API service is down
5. Rate limit exceeded

**Solutions:**

- Verify the API key is correct
- Check the API endpoint URL (no trailing slash)
- Test network connectivity: `curl https://api.deepseek.com`
- Check the provider's status page
- Wait a few minutes if rate limited

### "Invalid response from API"

**Possible causes:**

1. Model name is incorrect
2. API format has changed
3. Response timeout
4. API returned an error

**Solutions:**

- Verify the model name (e.g., `deepseek-chat`, not `deepseek`)
- Check the provider's documentation
- Increase the timeout
- Check the API logs for errors

### "Cannot delete active configuration"

**This is expected behavior.**

**Solution:**

- Activate a different configuration first
- Then delete the old one

---

## 🔒 Security Best Practices

### API Key Management

1. **Never commit API keys to git**
   - Use the .env file (already in .gitignore)
   - Or use environment variables
   - Or add keys via the admin panel only
2. **Rotate keys regularly**
   - Change API keys every 90 days
   - Rotate immediately if a key is compromised
3. **Use separate keys for dev/prod**
   - Development: use test/sandbox keys
   - Production: use production keys with limits
4. **Monitor usage**
   - Set up billing alerts
   - Track API usage
   - Set rate limits

### Database Security

API keys are stored in the database, so:

- Ensure the database has a strong password
- Use SSL/TLS for database connections
- Take regular backups
- Restrict database access

**Recommendation:** For production, encrypt API keys at rest using application-level encryption.

---

## 📝 API Endpoints

### Get All LLM Configs

```
GET /api/llm-config
```

Returns a list of all LLM configurations (without API keys).

### Get Active Config

```
GET /api/llm-config/active
```

Returns the currently active LLM configuration.

### Create Config

```
POST /api/llm-config
Body: {
  provider_name: string,
  api_endpoint: string,
  api_key: string (optional),
  model_name: string,
  temperature: number,
  max_tokens: number,
  timeout_seconds: number
}
```

### Update Config

```
PUT /api/llm-config/:id
Body: { ...same as create }
```

### Activate Config

```
PUT /api/llm-config/:id/activate
```

Deactivates all configs and activates the specified one.

### Delete Config

```
DELETE /api/llm-config/:id
```

Cannot delete the active configuration.
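Before sending a Create Config request, it can help to sanity-check the body against the field list and the recommended ranges from Advanced Configuration. The following is a minimal illustrative sketch, not part of the project's codebase; `validateLlmConfig` is a hypothetical helper name:

```javascript
// Hypothetical client-side check for a Create Config body. Field names
// mirror POST /api/llm-config; the numeric ranges follow the
// recommendations in the Advanced Configuration section.
function validateLlmConfig(config) {
  const errors = [];
  for (const field of ["provider_name", "api_endpoint", "model_name"]) {
    if (typeof config[field] !== "string" || config[field].trim() === "") {
      errors.push(`${field} is required`);
    }
  }
  // Troubleshooting advises no trailing slash on the endpoint URL.
  if (typeof config.api_endpoint === "string" && config.api_endpoint.endsWith("/")) {
    errors.push("api_endpoint should not have a trailing slash");
  }
  // Temperatures above 1.0 are "not recommended" per Advanced Configuration.
  if (config.temperature < 0 || config.temperature > 1.0) {
    errors.push("temperature should be between 0.0 and 1.0");
  }
  if (!Number.isInteger(config.max_tokens) || config.max_tokens <= 0) {
    errors.push("max_tokens must be a positive integer");
  }
  if (!Number.isInteger(config.timeout_seconds) || config.timeout_seconds <= 0) {
    errors.push("timeout_seconds must be a positive integer");
  }
  return errors;
}

// Example: the DeepSeek defaults from the Quick Start pass validation.
const deepseekConfig = {
  provider_name: "DeepSeek",
  api_endpoint: "https://api.deepseek.com",
  api_key: "your-api-key-here", // optional field
  model_name: "deepseek-chat",
  temperature: 0.7,
  max_tokens: 6000,
  timeout_seconds: 120,
};
console.log(validateLlmConfig(deepseekConfig)); // → []
```

An empty error list means the body is safe to submit; otherwise the messages point at the offending field.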
### Test Config

```
POST /api/llm-config/test
Body: {
  api_endpoint: string,
  api_key: string (optional),
  model_name: string
}
```

---

## 🎓 Example: Adding a Custom Provider

Let's add Azure OpenAI as a custom provider:

1. Go to Admin Panel > LLM 配置 (LLM Configuration)
2. Click **➕ 新增配置** (Add Configuration)
3. Fill in:

   ```
   Provider: Other
   API Endpoint: https://your-resource.openai.azure.com
   API Key: your-azure-api-key
   Model: gpt-35-turbo
   Temperature: 0.7
   Max Tokens: 6000
   Timeout: 120
   ```

4. Test the connection
5. Save and activate

**Note:** The API must be OpenAI-compatible (i.e., expose a `/v1/chat/completions` endpoint).

---

## 🆘 Getting Help

### Official Documentation

- **DeepSeek**: [https://platform.deepseek.com/docs](https://platform.deepseek.com/docs)
- **Ollama**: [https://ollama.ai/docs](https://ollama.ai/docs)
- **OpenAI**: [https://platform.openai.com/docs](https://platform.openai.com/docs)

### Project Documentation

- [README.md](../README.md) - Project overview
- [API_DOC.md](./API_DOC.md) - API documentation
- [QUICKSTART.md](../QUICKSTART.md) - Getting started guide

### Repository

https://gitea.theaken.com/donald/5why-analyzer

---

## 🎉 Success Checklist

Your LLM configuration is working correctly when:

- ✅ The connection test succeeds
- ✅ The configuration is marked as "啟用中" (Active)
- ✅ 5 Why analysis produces results without errors
- ✅ Analysis completes in a reasonable time (<2 minutes)
- ✅ Results are high quality and in the correct language
- ✅ No rate limit or quota errors

---

**Version**: 1.0.0
**Last Updated**: 2025-12-06
**Feature**: Multi-LLM Support
**Made with Claude Code** 🤖
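As an appendix, the `.env` values from the Configuration via Script section map directly onto the Create Config body documented under API Endpoints. A minimal sketch of that mapping, under the assumption that the defaults match the admin-panel form (`deepseekConfigFromEnv` is a hypothetical helper; the supported path remains `npm run llm:add-deepseek`):

```javascript
// Hypothetical sketch: build a Create Config body from the DEEPSEEK_*
// environment variables shown in "Configuration via Script". This only
// illustrates the mapping; it is not the actual setup script.
function deepseekConfigFromEnv(env) {
  return {
    provider_name: "DeepSeek",
    api_endpoint: env.DEEPSEEK_API_URL ?? "https://api.deepseek.com",
    api_key: env.DEEPSEEK_API_KEY, // optional in the API, required by DeepSeek
    model_name: env.DEEPSEEK_MODEL ?? "deepseek-chat",
    temperature: 0.7,      // defaults from the admin-panel form
    max_tokens: 6000,
    timeout_seconds: 120,
  };
}

// Usage: feed it the real process environment.
const config = deepseekConfigFromEnv(process.env);
```

The resulting object is what would be sent to `POST /api/llm-config`.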