# Provider Configuration
Before any completion call runs, your handler must register the providers it will use via `configure_provider`. This is the sole place API keys enter the system — you never pass `api_key` on a `ChatCompletionRequest` or `FallbackRequest`.
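The request-side half of this rule can be pictured as a simple guard: the library rejects `api_key` on a request with a `ValueError` before anything is sent. This is a hypothetical sketch of that check, not the actual primfunctions internals:

```python
def validate_request(request: dict) -> None:
    # Keys must come from configure_provider, never from the request itself.
    if "api_key" in request:
        raise ValueError("pass keys via configure_provider, not on the request")
    # The same rule applies to every fallback entry.
    for fallback in request.get("fallbacks", []):
        if "api_key" in fallback:
            raise ValueError("fallback entries must not carry api_key")
```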
## Why this pattern

- **Keys never flow through handler code.** When you use `voicerun_managed=True`, the proxy uses VoiceRun's own API key for that provider. Your agent code never touches it, and it never rides on the wire back from the handler.
- **Explicit provider intent.** If you call `generate_chat_completion` for a provider you didn't register, it raises `CompletionsProviderNotConfiguredError` before any network call. No silent fallbacks to a different key source.
- **Connection warmth.** The first `configure_provider` call for a provider kicks off a background warm request to the proxy, paying the TLS handshake at a moment the user is unlikely to notice (e.g. during your greeting TTS) rather than on the first real turn.
## Register providers

Do this once per session, typically in your `StartEvent` handler:

```python
from primfunctions.completions import configure_provider
from primfunctions.events import Event, StartEvent, TextToSpeechEvent
from primfunctions.context import Context


async def handler(event: Event, context: Context):
    if isinstance(event, StartEvent):
        # Use VoiceRun's mounted key — your code never sees it
        configure_provider("anthropic", voicerun_managed=True)

        # Or supply a customer-owned key for this session only
        configure_provider(
            "openai",
            api_key=context.variables.get("OPENAI_API_KEY"),
        )

        yield TextToSpeechEvent(text="Ready.", voice="kore")
```
## Every provider your code touches must be registered

This includes fallback providers. If your primary is `anthropic` and you fall back to `openai`, both need to be registered:

```python
configure_provider("anthropic", voicerun_managed=True)
configure_provider("openai", voicerun_managed=True)

response = await generate_chat_completion({
    "provider": "anthropic",
    "model": "claude-haiku-4-5",
    "messages": [UserMessage(content="hi")],
    "fallbacks": [
        # No api_key here — injected automatically from configure_provider
        {"provider": "openai", "model": "gpt-4.1-mini"},
    ],
})
```
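Conceptually, the pre-dispatch check walks every provider named in the request, primary and fallbacks alike, and refuses to proceed if any is unregistered. A hypothetical sketch of that walk (function name and `LookupError` are illustrative, not the library's API):

```python
def resolve_request_providers(request: dict, registry: dict) -> list:
    # Collect the primary provider plus every fallback provider.
    providers = [request["provider"]]
    providers += [fb["provider"] for fb in request.get("fallbacks", [])]
    # Every one of them must have been registered for this session.
    missing = [p for p in providers if p not in registry]
    if missing:
        raise LookupError(f"providers not configured: {missing}")
    return providers
```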
## Customer keys via `context.variables`

Customer-supplied keys come in through agent variables. Register them in `StartEvent` from `context.variables`:

```python
configure_provider(
    "anthropic",
    api_key=context.variables.get("ANTHROPIC_API_KEY"),
)
```

Variables are set on the agent's environment or per-session via the primvoices API. See Agent building → Context for how `context.variables` is populated.
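Because `.get()` returns `None` for an unset variable, a missing customer key would otherwise surface later as a provider auth failure mid-conversation. A small guard in your `StartEvent` handler fails fast instead (`require_variable` is a hypothetical helper, not part of primfunctions):

```python
def require_variable(variables: dict, name: str) -> str:
    # Fail at session start, not on the first completion call.
    value = variables.get(name)
    if not value:
        raise RuntimeError(f"agent variable {name!r} is not set for this session")
    return value

# Usage inside StartEvent (sketch):
# configure_provider("anthropic",
#                    api_key=require_variable(context.variables, "ANTHROPIC_API_KEY"))
```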
## Connection reuse is automatic

`primfunctions.completions` holds a module-level `aiohttp.ClientSession`, so repeated calls from your handler reuse the same HTTP connection to the proxy. The proxy in turn holds warm provider SDK clients keyed by `(provider, api_key)`, so repeated requests from the same session reuse the same TLS connection to the upstream LLM.
No client object, context manager, or explicit lifecycle management is required — this happens for free.
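The proxy-side caching described above amounts to a lazy cache keyed by `(provider, api_key)`. A minimal sketch of that pattern, with a placeholder class standing in for a real provider SDK client:

```python
# One warm client per (provider, api_key) pair, created lazily and
# reused across requests. Hypothetical model of the proxy's cache.
_clients: dict = {}

class _WarmClient:
    """Placeholder for a provider SDK client holding a live TLS connection."""
    def __init__(self, provider: str, api_key: str):
        self.provider = provider
        self.api_key = api_key

def get_client(provider: str, api_key: str) -> _WarmClient:
    key = (provider, api_key)
    if key not in _clients:
        # TLS handshake cost is paid once, on first use of this pair.
        _clients[key] = _WarmClient(provider, api_key)
    return _clients[key]
```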
## `CompletionsProviderNotConfiguredError`

```python
from primfunctions.completions import (
    CompletionsProviderNotConfiguredError,
    generate_chat_completion,
)

try:
    await generate_chat_completion({
        "provider": "anthropic",
        "model": "claude-haiku-4-5",
        "messages": [UserMessage(content="hi")],
    })
except CompletionsProviderNotConfiguredError as e:
    # Provider wasn't registered in this session — fix by calling
    # configure_provider("anthropic", ...) in StartEvent first.
    yield LogEvent(f"configuration error: {e}")
```
In practice you should never catch this at runtime — the call to `configure_provider` belongs in setup code, not inside a turn.
## Summary

- Call `configure_provider(provider, voicerun_managed=True | api_key=...)` for every provider (including fallbacks) in `StartEvent`.
- Never set `api_key` on a `ChatCompletionRequest` or `FallbackRequest` — it's rejected with a `ValueError`.
- Reuse is automatic. No client object to hold or close.
## Next steps
- Basic Usage — your first completion
- Reliability — retries and fallbacks using these registered providers
