
Quick Start

Install the Reflexio client and get it working end-to-end with your LLM agent in minutes.


Install

Pick the package that matches how you want to run Reflexio:

reflexio-client — a lightweight client that talks to the hosted cloud at https://www.reflexio.ai. No local server required.

pip install reflexio-client

Or with uv:

uv add reflexio-client

reflexio-ai — the full package containing the client, the FastAPI server, and the reflexio CLI. Use this when you want to run the entire Reflexio stack on your own machine.

pip install reflexio-ai

Or with uv:

uv add reflexio-ai

Both packages expose the same import: from reflexio import ReflexioClient.
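Since both distributions install the same reflexio import package, you can tell which one is present by checking the distribution names. A small sketch using the standard library (the distribution names match the pip package names above):

```python
# Check which Reflexio distribution is installed by querying package metadata.
from importlib.metadata import PackageNotFoundError, version

def installed_reflexio_packages():
    """Return a dict mapping installed Reflexio distribution -> version."""
    found = {}
    for dist in ("reflexio-client", "reflexio-ai"):
        try:
            found[dist] = version(dist)
        except PackageNotFoundError:
            pass  # this distribution is not installed
    return found

print(installed_reflexio_packages() or "no Reflexio package installed")
```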

Connect to Reflexio

Managed cloud: set your API key as an environment variable, then create the client:

export REFLEXIO_API_KEY=your-api-key
from reflexio import ReflexioClient

client = ReflexioClient()

Self-hosted: clone the Reflexio repository and start the server:

git clone https://github.com/ReflexioAI/reflexio.git
cd reflexio

Then point the client at your running server:

from reflexio import ReflexioClient

client = ReflexioClient(url_endpoint="http://localhost:8081")

Or set it via environment variable:

export REFLEXIO_API_URL=http://localhost:8081

Set Up Your LLM Provider

Reflexio uses an LLM for profile extraction, playbook generation, and semantic search. You need an API key for at least one provider.

The simplest way is to set an environment variable in your .env file:

# .env file in your Reflexio server directory
OPENAI_API_KEY=sk-your-key-here

OpenAI is recommended to get started — a single key covers both text generation and the default embedding model (text-embedding-3-small). Other supported providers include Anthropic, Google Gemini, DeepSeek, and more. See the Configuration Guide for the full provider list and advanced model selection.
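If you are not using a loader such as python-dotenv, a minimal sketch of reading a .env file into the process environment looks like this. The parsing here is a deliberately simplified assumption: it skips comments and blank lines but ignores quoting and multi-line values that a real loader handles.

```python
import os

def load_env(path=".env"):
    """Naively parse KEY=VALUE lines from a .env file into os.environ.

    Simplified sketch: skips comments and blank lines, but does not handle
    quoting or multi-line values the way python-dotenv does.
    """
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env()
print("OPENAI_API_KEY set:", "OPENAI_API_KEY" in os.environ)
```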

Configure

Configure Reflexio either programmatically (works for both deployments) or through the web portal (managed deployments only).

  • Programmatic — call client.set_config(...) from Python. Works for both deployments. See the Configuration Guide or API Reference.
  • Web Portal (managed only) — go to Settings in your Reflexio dashboard to configure profile extractors and playbook settings visually.
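As a non-authoritative sketch of the programmatic path, a configuration payload might be assembled as a plain dict before being applied. Every field name below is an illustrative assumption, not the actual Reflexio schema; consult the Configuration Guide and API Reference for the real keys and the real set_config signature.

```python
# Hypothetical configuration payload -- all keys are illustrative assumptions.
config = {
    "profile_extraction": {"enabled": True, "model": "gpt-4o-mini"},  # assumed shape
    "playbooks": {"enabled": True, "max_entries": 50},                # assumed shape
}

# With a live client this would be applied roughly as:
# client.set_config(**config)  # see the API Reference for the actual signature

# Sanity-check the payload shape before sending it.
assert all(isinstance(section, dict) for section in config.values())
print("sections:", ", ".join(sorted(config)))
```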

End-to-End Example: Agent with Memory

This example shows the complete workflow: retrieving user context, building an LLM prompt, and saving the conversation.

from reflexio import ReflexioClient, InteractionData, UserActionType

# Step 1: Connect
client = ReflexioClient()  # uses REFLEXIO_API_KEY env var
# Self-hosted: client = ReflexioClient(url_endpoint="http://localhost:8081")

# Step 2: Retrieve user context
user_id = "user_123"

# Get user profiles (semantic search)
profiles_response = client.search_profiles(
    user_id=user_id,
    query="preferences interests",
    top_k=5
)

# Get agent playbooks (guidelines from past interactions)
playbooks_response = client.get_agent_playbooks(limit=5)

# Step 3: Build LLM prompt with context
def build_prompt_with_memory(user_profiles, agent_playbooks, user_message):
    profile_context = "\n".join([
        f"- {p.content}"
        for p in user_profiles
    ])

    playbook_context = "\n".join([
        f"- {pb.content}"
        for pb in agent_playbooks
    ])

    return f"""You are a helpful assistant.

## What you know about this user:
{profile_context if profile_context else "No profile information yet."}

## Guidelines from past interactions:
{playbook_context if playbook_context else "No playbook entries yet."}

## User message:
{user_message}
"""

user_message = "What products would you recommend for me?"
prompt = build_prompt_with_memory(
    profiles_response.user_profiles,
    playbooks_response.agent_playbooks,
    user_message
)

# Step 4: Call your LLM (example with OpenAI)
# response = openai.chat.completions.create(
#     model="gpt-5",
#     messages=[{"role": "user", "content": prompt}]
# )
# agent_response = response.choices[0].message.content
agent_response = "Based on your preferences, I recommend..."  # placeholder

# Step 5: Publish the interaction (fire-and-forget by default)
client.publish_interaction(
    user_id=user_id,
    interactions=[
        InteractionData(
            role="User",
            content=user_message,
            user_action=UserActionType.NONE
        ),
        InteractionData(
            role="Agent",
            content=agent_response,
            user_action=UserActionType.NONE
        )
    ],
    source="chat",
    session_id="session_001"
)

print("Interaction published! Profiles will be extracted automatically.")
The same workflow with cURL:

# Self-hosted: replace https://www.reflexio.ai with http://localhost:8081
#              and remove the Authorization header

# Step 1: Search user profiles
curl -X POST "https://www.reflexio.ai/api/search_profiles" \
  -H "Authorization: Bearer $REFLEXIO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user_123",
    "query": "preferences interests",
    "top_k": 5
  }'

# Step 2: Get agent playbooks
curl -X POST "https://www.reflexio.ai/api/get_agent_playbooks" \
  -H "Authorization: Bearer $REFLEXIO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"limit": 5}'

# Step 3 & 4: Build prompt and call your LLM (done in your application code)

# Step 5: Publish the interaction
curl -X POST "https://www.reflexio.ai/api/publish_interaction" \
  -H "Authorization: Bearer $REFLEXIO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user_123",
    "interactions": [
      {
        "role": "User",
        "content": "What products would you recommend for me?",
        "user_action": "none"
      },
      {
        "role": "Agent",
        "content": "Based on your preferences, I recommend...",
        "user_action": "none"
      }
    ],
    "source": "chat",
    "session_id": "session_001"
  }'

Key Concepts

  • Fire-and-forget: publish_interaction returns immediately by default. Set wait_for_response=True to wait for processing.
  • Sessions: Use session_id to group related interactions for tracking and attribution.
  • Sources: Categorize interactions (e.g., "chat", "email") for filtering during searches.
  • Semantic search: Use natural language queries like "user preferences" rather than exact keywords.
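To illustrate the sessions and sources concepts above, here is a standalone sketch that groups interactions by session and filters them by source. Plain dicts stand in for the published interaction records; the field names mirror the publish_interaction payload shown earlier.

```python
from collections import defaultdict

# Plain dicts standing in for published interaction records.
interactions = [
    {"session_id": "session_001", "source": "chat",  "content": "Hi"},
    {"session_id": "session_001", "source": "chat",  "content": "Hello!"},
    {"session_id": "session_002", "source": "email", "content": "Order update?"},
]

# Sessions: group related interactions under their session_id.
by_session = defaultdict(list)
for item in interactions:
    by_session[item["session_id"]].append(item)

# Sources: filter interactions by channel.
chat_only = [item for item in interactions if item["source"] == "chat"]

print(len(by_session), "sessions;", len(chat_only), "chat interactions")
# → 2 sessions; 2 chat interactions
```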

Next Steps