LLM Configuration
Configure LLM provider API keys, model selection, Azure OpenAI, and custom endpoints for Reflexio.
Reflexio uses LiteLLM for multi-provider LLM support. You must configure an API key for at least one provider.
Enterprise
Reflexio Enterprise users can configure LLM provider API keys and model selection through the Settings page in the web portal under Advanced Settings.
API Keys
Method 1: Environment Variables (Recommended)
Set provider-specific environment variables in your .env file. LiteLLM picks them up automatically.
```sh
# .env — set one or more provider keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
DEEPSEEK_API_KEY=...
```

OpenAI is recommended as a starting point: a single key covers both text generation and the default embedding model (text-embedding-3-small).
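To confirm LiteLLM will actually see your keys, you can check the environment at startup. The helper below is an illustrative sketch, not part of Reflexio's API; the variable names match the table of supported providers further down.

```python
import os

# Illustrative helper (not a Reflexio API): list which common provider
# key variables are set and non-empty, so you can fail fast instead of
# hitting an authentication error mid-request.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
}

def configured_providers(env=None):
    """Return the names of providers whose key variable is set."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDER_ENV_VARS.items() if env.get(var)]
```

Calling `configured_providers()` at startup and asserting the result is non-empty gives a clearer failure than a deferred provider error.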
Method 2: Programmatic Configuration
Set API keys via the Config object. Programmatic keys take precedence over environment variables.
```python
from reflexio.models.config_schema import APIKeyConfig, OpenAIConfig

config = client.get_config()
config.api_key_config = APIKeyConfig(
    openai=OpenAIConfig(api_key="sk-your-key-here")
)
client.set_config(config)
```

Supported Providers:
| Provider | Config Class | Environment Variable |
|---|---|---|
| OpenAI | OpenAIConfig | OPENAI_API_KEY |
| Anthropic | AnthropicConfig | ANTHROPIC_API_KEY |
| Google Gemini | GeminiConfig | GEMINI_API_KEY |
| DeepSeek | DeepSeekConfig | DEEPSEEK_API_KEY |
| OpenRouter | OpenRouterConfig | OPENROUTER_API_KEY |
| MiniMax | MiniMaxConfig | MINIMAX_API_KEY |
| DashScope (Qwen) | DashScopeConfig | DASHSCOPE_API_KEY |
| Zhipu AI | ZAIConfig | ZAI_API_KEY |
| Moonshot | MoonshotConfig | MOONSHOT_API_KEY |
| xAI | XAIConfig | XAI_API_KEY |
Custom OpenAI-Compatible Endpoints
Use CustomEndpointConfig to connect to any OpenAI-compatible API. Custom endpoints take priority over other providers for text generation (but not embeddings).
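Before wiring a custom endpoint into Reflexio, it can help to verify the server really speaks the OpenAI wire format. The sketch below uses only the standard library and is independent of Reflexio: it builds a /chat/completions request for such an endpoint, which you can then send with urllib.request.urlopen once the server is running.

```python
import json
import urllib.request

def build_chat_request(api_base: str, api_key: str, model: str, prompt: str):
    """Build (but do not send) an OpenAI-style /chat/completions request."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        api_base.rstrip("/") + "/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

If the endpoint returns a JSON body with a `choices` array for this request, it should work as a CustomEndpointConfig target.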
```python
from reflexio.models.config_schema import APIKeyConfig, CustomEndpointConfig

config = client.get_config()
config.api_key_config = APIKeyConfig(
    custom_endpoint=CustomEndpointConfig(
        model="my-model",
        api_key="your-key",
        api_base="http://localhost:8000/v1"
    )
)
client.set_config(config)
```

Azure OpenAI
Use AzureOpenAIConfig nested inside OpenAIConfig to connect to Azure OpenAI:
```python
from reflexio.models.config_schema import (
    APIKeyConfig, OpenAIConfig, AzureOpenAIConfig
)

config = client.get_config()
config.api_key_config = APIKeyConfig(
    openai=OpenAIConfig(
        azure_config=AzureOpenAIConfig(
            api_key="your-azure-key",
            endpoint="https://your-resource.openai.azure.com/",
            api_version="2024-02-15-preview",
            deployment_name="gpt-4o"
        )
    )
)
client.set_config(config)
```

Model Selection
Reflexio uses different models for different tasks. Sensible defaults are provided, but you can override them via LLMConfig. Only set fields you want to override — None fields keep the defaults.
| Field | Default | Purpose |
|---|---|---|
| should_run_model_name | minimax/MiniMax-M2.5 | Fast check to decide if extraction should run on a given interaction |
| generation_model_name | minimax/MiniMax-M2.5 | Profile extraction, playbook generation, and evaluation |
| embedding_model_name | text-embedding-3-small | Vector embeddings for semantic search |
| pre_retrieval_model_name | minimax/MiniMax-M2.5 | Pre-retrieval query reformulation |
```python
from reflexio.models.config_schema import LLMConfig

config = client.get_config()
config.llm_config = LLMConfig(
    generation_model_name="openai/gpt-4o",
    embedding_model_name="text-embedding-3-small",
)
client.set_config(config)
```

Model names use LiteLLM's "provider/model-name" format (e.g., "openai/gpt-4o", "anthropic/claude-3-5-sonnet", "deepseek/deepseek-chat").
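The provider prefix is the part before the first slash, so a name such as openrouter/openai/gpt-4o keeps its remaining slashes in the model part, and a bare name like text-embedding-3-small carries no explicit provider. A small sketch of that split (illustrative, not a Reflexio or LiteLLM API):

```python
def split_model_name(name: str):
    """Split a LiteLLM-style "provider/model" string on the first slash.

    A name with no slash has no explicit provider prefix; LiteLLM then
    falls back to its own default routing for that model.
    """
    provider, sep, model = name.partition("/")
    if not sep:
        return None, name
    return provider, model
```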