
LLM Configuration

Configure LLM provider API keys, model selection, Azure OpenAI, and custom endpoints for Reflexio.

Reflexio uses LiteLLM for multi-provider LLM support. You must configure an API key for at least one provider.

Enterprise

Reflexio Enterprise users can configure LLM provider API keys and model selection through the Settings page in the web portal under Advanced Settings.

API Keys

Method 1: Environment Variables (Recommended)

Set provider-specific environment variables in your .env file. LiteLLM picks them up automatically.

# .env — set one or more provider keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
DEEPSEEK_API_KEY=...

OpenAI is recommended as a starting point — it covers both text generation and the default embedding model (text-embedding-3-small) with a single key.
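LiteLLM reads these keys from the process environment. If your runtime does not load .env files automatically, you can use a loader such as python-dotenv, or parse the file yourself. A minimal stdlib sketch of that KEY=VALUE parsing (illustrative only, not part of Reflexio):

```python
import os

def load_dotenv_minimal(path=".env"):
    """Parse simple KEY=VALUE lines into os.environ.
    Illustrative only: no quoting, escaping, or multiline support."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the real environment
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_dotenv_minimal()
```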

Method 2: Programmatic Configuration

Set API keys via the Config object. Programmatic keys take precedence over environment variables.

from reflexio.models.config_schema import APIKeyConfig, OpenAIConfig

# `client` is an existing Reflexio client instance
config = client.get_config()
config.api_key_config = APIKeyConfig(
    openai=OpenAIConfig(api_key="sk-your-key-here")
)
client.set_config(config)

Supported Providers:

Provider         | Config Class     | Environment Variable
OpenAI           | OpenAIConfig     | OPENAI_API_KEY
Anthropic        | AnthropicConfig  | ANTHROPIC_API_KEY
Google Gemini    | GeminiConfig     | GEMINI_API_KEY
DeepSeek         | DeepSeekConfig   | DEEPSEEK_API_KEY
OpenRouter       | OpenRouterConfig | OPENROUTER_API_KEY
MiniMax          | MiniMaxConfig    | MINIMAX_API_KEY
DashScope (Qwen) | DashScopeConfig  | DASHSCOPE_API_KEY
Zhipu AI         | ZAIConfig        | ZAI_API_KEY
Moonshot         | MoonshotConfig   | MOONSHOT_API_KEY
xAI              | XAIConfig        | XAI_API_KEY

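When using environment variables, a quick way to see which providers are usable in the current process is to check the variables from the table above. A small hypothetical helper, not part of Reflexio:

```python
import os

# Environment variable per provider, from the table above
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "gemini": "GEMINI_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "minimax": "MINIMAX_API_KEY",
    "dashscope": "DASHSCOPE_API_KEY",
    "zai": "ZAI_API_KEY",
    "moonshot": "MOONSHOT_API_KEY",
    "xai": "XAI_API_KEY",
}

def configured_providers():
    """Return providers whose API-key variable is set and non-empty."""
    return [p for p, var in PROVIDER_ENV_VARS.items() if os.environ.get(var)]
```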
Custom OpenAI-Compatible Endpoints

Use CustomEndpointConfig to connect to any OpenAI-compatible API. Custom endpoints take priority over other providers for text generation (but not embeddings).

from reflexio.models.config_schema import APIKeyConfig, CustomEndpointConfig

config = client.get_config()
config.api_key_config = APIKeyConfig(
    custom_endpoint=CustomEndpointConfig(
        model="my-model",
        api_key="your-key",
        api_base="http://localhost:8000/v1"
    )
)
client.set_config(config)

Azure OpenAI

Use AzureOpenAIConfig nested inside OpenAIConfig to connect to Azure OpenAI:

from reflexio.models.config_schema import (
    APIKeyConfig, OpenAIConfig, AzureOpenAIConfig
)

config = client.get_config()
config.api_key_config = APIKeyConfig(
    openai=OpenAIConfig(
        azure_config=AzureOpenAIConfig(
            api_key="your-azure-key",
            endpoint="https://your-resource.openai.azure.com/",
            api_version="2024-02-15-preview",
            deployment_name="gpt-4o"
        )
    )
)
client.set_config(config)

Model Selection

Reflexio uses different models for different tasks. Sensible defaults are provided, but you can override them via LLMConfig. Only set fields you want to override — None fields keep the defaults.

Field                    | Default                | Purpose
should_run_model_name    | minimax/MiniMax-M2.5   | Fast check to decide whether extraction should run on a given interaction
generation_model_name    | minimax/MiniMax-M2.5   | Profile extraction, playbook generation, and evaluation
embedding_model_name     | text-embedding-3-small | Vector embeddings for semantic search
pre_retrieval_model_name | minimax/MiniMax-M2.5   | Pre-retrieval query reformulation

from reflexio.models.config_schema import LLMConfig

config = client.get_config()
config.llm_config = LLMConfig(
    generation_model_name="openai/gpt-4o",
    embedding_model_name="text-embedding-3-small",
)
client.set_config(config)

Model names use LiteLLM's "provider/model-name" format (e.g., "openai/gpt-4o", "anthropic/claude-3-5-sonnet", "deepseek/deepseek-chat").
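Splitting the provider prefix off such a name is a simple partition on the first "/". A small illustrative helper (not part of Reflexio or LiteLLM):

```python
def split_model_name(name: str):
    """Split 'provider/model' into (provider, model).
    Names without a '/' are returned with provider None."""
    provider, sep, model = name.partition("/")
    if not sep:
        return None, name
    return provider, model
```

Splitting on the first "/" keeps nested names intact, e.g. "openrouter/anthropic/claude-3-5-sonnet" yields provider "openrouter".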