Configuration
Methods for getting and setting system configuration.
Configuration Management
get_config
Get the current system configuration.
config = client.get_config()
Returns: Config — see the Config Schema for every nested type.
Example:
config = client.get_config()
print(f"Profile extractors: {len(config.profile_extractor_configs)}")
print(f"User playbook extractors: {len(config.user_playbook_extractor_configs)}")
# Inspect profile extractors
for extractor in config.profile_extractor_configs:
    print(f"  Extractor: {extractor.extraction_definition_prompt[:50]}...")
set_config
Set the system configuration. Accepts a full Config object or a partial dict — the server fills in defaults and auto-migrates legacy field names.
response = client.set_config(config)
Returns: dict with success and message keys.
The parameter type is the same Config model returned by get_config. See the Config Schema for every field, including the nested ProfileExtractorConfig, UserPlaybookExtractorConfig, PlaybookAggregatorConfig, DeduplicationConfig, AgentSuccessConfig, APIKeyConfig, and LLMConfig shapes.
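Because the server fills in defaults, a partial payload only needs the fields you want to override. A minimal sketch (the extractor name and threshold values here are illustrative, not required defaults):

```python
# A partial config payload: only the supplied fields are overridden;
# the server defaults everything else. Values are illustrative.
partial = {
    "user_playbook_extractor_configs": [{
        "extractor_name": "quality",
        "extraction_definition_prompt": "Extract playbook entries about response quality",
        "deduplication_config": {"search_threshold": 0.4, "search_top_k": 5},
    }],
}

# response = client.set_config(partial)
# print(response["message"])
```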
Example — using a preset:
response = client.set_config({
    "extraction_preset": "long_form",  # auto-sets batch_size=25, batch_interval=10
    "user_playbook_extractor_configs": [{
        "extractor_name": "quality",
        "extraction_definition_prompt": "Extract playbook entries about response quality",
    }],
})
Example — full configuration:
config = client.get_config()
# Add profile extractors
config.profile_extractor_configs = [
    {
        "extractor_name": "preferences_extractor",
        "extraction_definition_prompt": "Extract user preferences and interests",
        "context_prompt": "Analyzing customer conversations",
    },
    {
        "extractor_name": "skill_extractor",
        "extraction_definition_prompt": "Extract user technical skill level",
        "context_prompt": "Technical support interactions",
        "metadata_definition_prompt": "Include skill areas and proficiency levels",
    },
]
# Add playbook configuration with aggregation and deduplication
config.user_playbook_extractor_configs = [
    {
        "extractor_name": "quality_playbook",
        "extraction_definition_prompt": "Extract playbook entries about response quality",
        "aggregation_config": {
            "min_cluster_size": 3,
            "clustering_similarity": 0.6,
        },
        "deduplication_config": {
            "search_threshold": 0.4,
            "search_top_k": 5,
        },
    },
    {
        "extractor_name": "helpfulness_playbook",
        "extraction_definition_prompt": "Identify whether the response was helpful to the user",
        "aggregation_config": {"min_cluster_size": 5},
    },
]
response = client.set_config(config)
print(f"Config updated: {response['success']}")
Identity & Storage Routing
Two read-only endpoints expose information about the org and storage backing the current API key. Both are commonly used by managed Reflexio Enterprise users to verify which org and storage their API key resolves to.
whoami
Return the server's view of the caller's org and storage routing. The response is masked — it never contains raw credentials, so it is safe to print or include in bug reports.
identity = client.whoami()
print(identity.org_id, identity.storage_type, identity.storage_label)
The reflexio status whoami CLI command wraps this method and prints a formatted summary.
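Because the whoami response is masked, it can also serve as a cheap guard in scripts, for example verifying that the API key routes to the expected org before a config write. A sketch with a stubbed identity object (the SimpleNamespace stand-in and the org id are hypothetical; the real call is client.whoami()):

```python
from types import SimpleNamespace

# Stand-in for client.whoami(); the real response exposes the same
# attributes. "org_abc123" and the storage values are placeholders.
identity = SimpleNamespace(org_id="org_abc123", storage_type="postgres",
                           storage_label="primary")

EXPECTED_ORG = "org_abc123"
if identity.org_id != EXPECTED_ORG:
    raise RuntimeError(f"API key routes to unexpected org: {identity.org_id}")
print("routing verified:", identity.org_id, identity.storage_label)
```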
get_my_config
Return the raw storage credentials for the caller's org. Used by reflexio config pull / config storage to let users move per-org server-side config to a fresh machine.
my_config = client.get_my_config()
if my_config.success:
    print(my_config.storage_type)
    # my_config.storage_config is a dict containing raw credentials
Unlike whoami, this response does contain raw credentials. Do not log or print the full response. Treat it like any other secret material.
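One way to honor that: instead of printing, write the credentials straight to an owner-only file when migrating config to a fresh machine. A sketch; the placeholder dict stands in for my_config.storage_config, and the filename is arbitrary:

```python
import json
import os
import stat

# Placeholder standing in for my_config.storage_config from get_my_config().
storage_config = {"storage_type": "example", "credential": "REDACTED"}

path = "storage_config.json"
with open(path, "w") as f:
    json.dump(storage_config, f)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600: owner read/write only
```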