# LLM Configuration
DeepSearcher supports various Large Language Models (LLMs) for processing queries and generating responses.
## 📝 Basic Configuration

```python
config.set_provider_config("llm", "(LLMName)", "(Arguments dict)")
```

The `LLMName` is one of the providers listed below; the arguments dict holds provider-specific options such as the model name.
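For context, here is a minimal end-to-end sketch of where this call fits. The `Configuration` and `init_config` names follow DeepSearcher's quick-start; treat the exact import paths as assumptions if your version differs.

```python
from deepsearcher.configuration import Configuration, init_config

# Create a configuration object and select an LLM provider
# (OpenAI is used here purely as an example).
config = Configuration()
config.set_provider_config("llm", "OpenAI", {"model": "o1-mini"})

# Apply the configuration so subsequent DeepSearcher calls use this LLM.
init_config(config=config)
```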
## 📋 Available LLM Providers

| Provider | Description | Key Models |
|----------|-------------|------------|
| OpenAI | OpenAI's API for GPT models | o1-mini, GPT-4 |
| DeepSeek | DeepSeek AI offering | deepseek-reasoner, coder |
| Anthropic | Anthropic's Claude models | claude-sonnet-4-0 |
| Gemini | Google's Gemini models | gemini-1.5-pro, gemini-2.0-flash |
| XAI | X.AI's Grok models | grok-2-latest |
| Ollama | Local LLM deployment | llama3, qwq, etc. |
| SiliconFlow | Enterprise AI platform | deepseek-r1 |
| TogetherAI | Multiple model options | llama-4, deepseek |
| PPIO | Cloud AI infrastructure | deepseek, llama |
| Volcengine | ByteDance LLM platform | deepseek-r1 |
| GLM | ChatGLM models | glm-4-plus |
| Bedrock | Amazon Bedrock LLMs | anthropic.claude, ai21.j2 |
| Novita | Novita AI models | Various options |
| IBM watsonx.ai | IBM enterprise AI platform | Various options |
## 🔍 Provider Examples
### OpenAI

```python
config.set_provider_config("llm", "OpenAI", {"model": "o1-mini"})
```

Requires the `OPENAI_API_KEY` environment variable.
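If you prefer setting the key from Python instead of your shell, a minimal sketch follows; the key value is a placeholder, and the same pattern applies to every provider's environment variables below.

```python
import os

# Placeholder key; in real deployments, export OPENAI_API_KEY in your shell
# or use a secrets manager rather than hard-coding it in source.
os.environ["OPENAI_API_KEY"] = "sk-..."

config.set_provider_config("llm", "OpenAI", {"model": "o1-mini"})
```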
### DeepSeek

```python
config.set_provider_config("llm", "DeepSeek", {"model": "deepseek-reasoner"})
```

Requires the `DEEPSEEK_API_KEY` environment variable.
### IBM WatsonX

```python
config.set_provider_config("llm", "WatsonX", {"model": "ibm/granite-3-3-8b-instruct"})
```

Requires the `WATSONX_APIKEY`, `WATSONX_URL`, and `WATSONX_PROJECT_ID` environment variables.
## 📚 Additional Providers
### DeepSeek from SiliconFlow

```python
config.set_provider_config("llm", "SiliconFlow", {"model": "deepseek-ai/DeepSeek-R1"})
```

Requires the `SILICONFLOW_API_KEY` environment variable.

More details about SiliconFlow: https://docs.siliconflow.cn/quickstart
### DeepSeek from TogetherAI

Requires the `TOGETHER_API_KEY` environment variable and `pip install together`.

For DeepSeek R1:

```python
config.set_provider_config("llm", "TogetherAI", {"model": "deepseek-ai/DeepSeek-R1"})
```

For Llama 4:

```python
config.set_provider_config("llm", "TogetherAI", {"model": "meta-llama/Llama-4-Scout-17B-16E-Instruct"})
```

More details about TogetherAI: https://www.together.ai/
### XAI Grok

```python
config.set_provider_config("llm", "XAI", {"model": "grok-2-latest"})
```

Requires the `XAI_API_KEY` environment variable.

More details about XAI Grok: https://docs.x.ai/docs/overview#featured-models
### Claude

```python
config.set_provider_config("llm", "Anthropic", {"model": "claude-sonnet-4-0"})
```

Requires the `ANTHROPIC_API_KEY` environment variable.

More details about Anthropic Claude: https://docs.anthropic.com/en/home
### Google Gemini

```python
config.set_provider_config("llm", "Gemini", {"model": "gemini-2.0-flash"})
```

Requires the `GEMINI_API_KEY` environment variable and `pip install google-genai`.

More details about Gemini: https://ai.google.dev/gemini-api/docs
### DeepSeek from PPIO

```python
config.set_provider_config("llm", "PPIO", {"model": "deepseek/deepseek-r1-turbo"})
```

Requires the `PPIO_API_KEY` environment variable.

More details about PPIO: https://ppinfra.com/docs/get-started/quickstart.html
### Ollama

```python
config.set_provider_config("llm", "Ollama", {"model": "qwq"})
```

Follow these instructions to set up and run a local Ollama instance:

- Download and install Ollama.
- Browse available models in the model library.
- Pull a model with `ollama pull <name-of-model>`.
- By default, Ollama serves a REST API at http://localhost:11434 (a quick reachability check is sketched after this list).
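Before pointing DeepSearcher at Ollama, you can sanity-check that the local server is reachable. A minimal sketch, assuming the default endpoint and using only the standard library:

```python
import urllib.request

# The Ollama server answers on its root endpoint when it is running;
# adjust the address if you changed OLLAMA_HOST.
with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
    print(resp.status, resp.read().decode())  # expect: 200 "Ollama is running"
```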
### Volcengine

```python
config.set_provider_config("llm", "Volcengine", {"model": "deepseek-r1-250120"})
```

Requires the `VOLCENGINE_API_KEY` environment variable.

More details about Volcengine: https://www.volcengine.com/docs/82379/1099455
### GLM

```python
config.set_provider_config("llm", "GLM", {"model": "glm-4-plus"})
```

Requires the `GLM_API_KEY` environment variable and `pip install zhipuai`.

More details about GLM: https://bigmodel.cn/dev/welcome
### Amazon Bedrock

```python
config.set_provider_config("llm", "Bedrock", {"model": "us.deepseek.r1-v1:0"})
```

Requires the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables and `pip install boto3`.

More details about Amazon Bedrock: https://docs.aws.amazon.com/bedrock/
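Since Bedrock access goes through boto3, it can help to confirm that your credentials resolve before configuring DeepSearcher. A minimal sketch using boto3's standard credential chain:

```python
import boto3

# boto3 searches environment variables, shared config files, and instance
# roles; get_credentials() returns None if nothing resolves.
session = boto3.Session()
if session.get_credentials() is None:
    raise RuntimeError(
        "No AWS credentials found; set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY."
    )
```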
### Aliyun Bailian

Aliyun Bailian exposes an OpenAI-compatible endpoint, so it is configured through the OpenAI provider with a custom `base_url`:

```python
config.set_provider_config("llm", "OpenAI", {"model": "deepseek-r1", "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1"})
```

Requires the `OPENAI_API_KEY` environment variable (set it to your Bailian API key, since that is the key sent to the configured endpoint).

More details about Aliyun Bailian models: https://bailian.console.aliyun.com
### IBM watsonx.ai LLM

```python
config.set_provider_config("llm", "WatsonX", {"model": "ibm/granite-3-3-8b-instruct"})
```

With custom parameters:

```python
config.set_provider_config("llm", "WatsonX", {
    "model": "ibm/granite-3-3-8b-instruct",
    "max_new_tokens": 1000,
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50
})
```

With `space_id` instead of `project_id` (the ID value is a placeholder):

```python
config.set_provider_config("llm", "WatsonX", {
    "model": "ibm/granite-3-3-8b-instruct",
    "space_id": "your-space-id"
})
```

Requires the `WATSONX_APIKEY`, `WATSONX_URL`, and `WATSONX_PROJECT_ID` environment variables and `pip install ibm-watsonx-ai`.

More details about WatsonX: https://www.ibm.com/products/watsonx-ai/foundation-models
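To set the three variables from Python, a minimal sketch; all values are placeholders, and the URL shown is the us-south endpoint, so adjust it to your region.

```python
import os

# Placeholder credentials; substitute your own IBM Cloud values.
os.environ["WATSONX_APIKEY"] = "your-ibm-cloud-api-key"
os.environ["WATSONX_URL"] = "https://us-south.ml.cloud.ibm.com"  # region-specific
os.environ["WATSONX_PROJECT_ID"] = "your-project-id"  # or use space_id as shown above

config.set_provider_config("llm", "WatsonX", {"model": "ibm/granite-3-3-8b-instruct"})
```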