
Provider Adapters

The adapter pattern is the core abstraction that allows the gateway to support 28 AI providers through a single unified API. Each adapter translates requests and responses between the gateway’s internal format and a provider’s native API.

The Adapter Pattern

Unified Request                         Provider-Specific Request
(UnifiedRequest)                        (e.g., OpenAI format)
      |                                            ^
      |           toProviderRequest()              |
      +------------------------------------------->+
                                                   |
                                              Provider API
                                                   |
      +<-------------------------------------------+
      |           fromProviderResponse()
      v
Unified Response                        Provider-Specific Response
(UnifiedResponse)
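The unified types in the diagram can be sketched as follows. These shapes are illustrative, assembled from the field names used elsewhere on this page (topP, maxTokens, inputTokens, outputTokens); the gateway's real interfaces may carry more fields.

```typescript
// Illustrative shapes only -- the gateway's actual UnifiedRequest/
// UnifiedResponse interfaces may define additional fields.
interface UnifiedMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface UnifiedRequest {
  model: string;
  messages: UnifiedMessage[];
  topP?: number;      // mapped to e.g. OpenAI's top_p
  maxTokens?: number; // mapped to e.g. OpenAI's max_tokens
  stream?: boolean;
}

interface UnifiedResponse {
  content: string;
  usage: { inputTokens: number; outputTokens: number };
}

// A request in the unified format, before adapter translation.
const request: UnifiedRequest = {
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  maxTokens: 64,
};
```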

BaseProviderAdapter

All adapters extend BaseProviderAdapter, which provides:

| Method | Purpose |
| --- | --- |
| toProviderRequest() | Convert UnifiedRequest to provider-specific request body (abstract) |
| fromProviderResponse() | Convert provider response to UnifiedResponse (abstract) |
| execute() | Send a non-streaming request to the provider (abstract) |
| executeStream() | Send a streaming request, returning an AsyncGenerator<UnifiedStreamChunk> (abstract) |
| fromProviderStreamChunk() | Transform a single SSE chunk to UnifiedStreamChunk (abstract) |
| healthCheck() | Verify provider connectivity (abstract) |
| listModels() | List available models from the provider (abstract) |
| buildHeaders() | Construct HTTP headers with API key authentication |
| httpRequest() | Make HTTP requests with timeout and SSRF validation |
| parseSSEStream() | Parse Server-Sent Events from a fetch Response |
| validateUrl() | Block SSRF vectors (private IPs, internal hosts) |
| sanitizeProviderError() | Strip API keys from error messages |

Optional capability methods have default implementations that throw “not supported”:

  • executeEmbedding() — Vector embeddings
  • executeAudio() — Audio transcription/translation
  • executeImageGeneration() — Image generation
  • executeTextToSpeech() — Text-to-speech
  • executeRerank() — Document reranking
  • executeVideoGeneration() — Video generation
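The "throw by default" pattern for optional capabilities can be sketched like this. The class and method names mirror this page, but the signatures and error type are simplified stand-ins for the real base class.

```typescript
// Sketch of the default "not supported" behavior for optional capabilities.
// Request/response types are simplified for illustration.
abstract class AdapterSketch {
  abstract execute(req: object): Promise<object>;

  // Optional capability: adapters that support embeddings override this.
  async executeEmbedding(_req: object): Promise<object> {
    throw new Error("executeEmbedding not supported by this provider");
  }
}

// A chat-only adapter implements execute() but leaves the default in place.
class ChatOnlyAdapter extends AdapterSketch {
  async execute(_req: object): Promise<object> {
    return { content: "ok" };
  }
}

const adapter = new ChatOnlyAdapter();
// adapter.executeEmbedding({}) rejects with a "not supported" error.
```

Because the defaults live on the base class, callers can probe any capability uniformly and translate the rejection into a consistent gateway error.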

Supported Providers (28)

| Provider | Adapter File | Key Capabilities |
| --- | --- | --- |
| OpenAI | openai.adapter.ts | Chat, embeddings, audio, TTS, images, vision, function calling |
| Anthropic | anthropic.adapter.ts | Chat, vision, function calling, streaming |
| Azure OpenAI | azure-openai.adapter.ts | Chat, embeddings, audio, TTS, images (OpenAI models via Azure) |
| Google Gemini | google-gemini.adapter.ts | Chat, embeddings, vision, function calling |
| Groq | groq.adapter.ts | Chat, streaming (fast inference) |
| Mistral | mistral.adapter.ts | Chat, embeddings, function calling |
| Cohere | cohere.adapter.ts | Chat, embeddings, rerank |
| DeepSeek | deepseek.adapter.ts | Chat, streaming |
| Together AI | together-ai.adapter.ts | Chat, embeddings, images |
| Fireworks | fireworks.adapter.ts | Chat, embeddings, streaming |
| Perplexity | perplexity.adapter.ts | Chat (search-augmented) |
| AI21 | ai21.adapter.ts | Chat, completions |
| HuggingFace | huggingface.adapter.ts | Chat, embeddings |
| xAI | xai.adapter.ts | Chat, streaming |
| Cerebras | cerebras.adapter.ts | Chat, streaming (fast inference) |
| SambaNova | sambanova.adapter.ts | Chat, streaming |
| Ollama | ollama.adapter.ts | Chat, embeddings (local) |
| vLLM | vllm.adapter.ts | Chat, embeddings (self-hosted) |
| LM Studio | lmstudio.adapter.ts | Chat (local, OpenAI-compatible) |
| LocalAI | localai.adapter.ts | Chat, embeddings, audio, TTS, images (local) |
| llama.cpp | llamacpp.adapter.ts | Chat (local) |
| AssemblyAI | assemblyai.adapter.ts | Audio transcription |
| ElevenLabs | elevenlabs.adapter.ts | Text-to-speech |
| Whisper Local | whisper-local.adapter.ts | Audio transcription (local) |
| Replicate | replicate.adapter.ts | Images, video generation |
| ComfyUI | comfyui.adapter.ts | Images, video generation (workflow-based) |
| Stability AI | stability.adapter.ts | Image generation |

Request/Response Translation

Each adapter translates field names and structures. For example, the OpenAI adapter:

  • Maps UnifiedRequest.messages to OpenAI’s messages array format
  • Converts multimodal content (images) to OpenAI’s image_url format
  • Maps topP to top_p, maxTokens to max_tokens
  • Adds stream_options: { include_usage: true } for streaming requests
  • On response, extracts choices[0].message.content into UnifiedResponse.content
  • Normalizes usage fields from prompt_tokens/completion_tokens to inputTokens/outputTokens
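As a sketch, the OpenAI-direction mappings above look roughly like this. These are standalone functions for illustration (in the gateway they are methods on the adapter class), and the unified field names are the ones used on this page.

```typescript
// Simplified unified and OpenAI-style request shapes for illustration.
interface UnifiedChatRequest {
  model: string;
  messages: unknown[];
  topP?: number;
  maxTokens?: number;
  stream?: boolean;
}

interface OpenAIChatRequest {
  model: string;
  messages: unknown[];
  top_p?: number;
  max_tokens?: number;
  stream?: boolean;
  stream_options?: { include_usage: boolean };
}

function toOpenAIRequest(req: UnifiedChatRequest): OpenAIChatRequest {
  const out: OpenAIChatRequest = { model: req.model, messages: req.messages };
  if (req.topP !== undefined) out.top_p = req.topP;         // topP -> top_p
  if (req.maxTokens !== undefined) out.max_tokens = req.maxTokens; // maxTokens -> max_tokens
  if (req.stream) {
    out.stream = true;
    out.stream_options = { include_usage: true }; // request usage in the final stream chunk
  }
  return out;
}

function fromOpenAIResponse(raw: {
  choices: { message: { content: string } }[];
  usage: { prompt_tokens: number; completion_tokens: number };
}) {
  return {
    content: raw.choices[0].message.content,
    usage: {
      inputTokens: raw.usage.prompt_tokens,   // prompt_tokens -> inputTokens
      outputTokens: raw.usage.completion_tokens, // completion_tokens -> outputTokens
    },
  };
}
```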

Adding a New Provider

To add support for a new AI provider:

  1. Create the adapter file at packages/server/src/providers/<name>.adapter.ts.
  2. Extend BaseProviderAdapter and implement the required abstract methods.
  3. Set providerType to a unique slug (e.g., my-provider).
  4. Set supportedCapabilities to the list of capabilities the provider supports.
  5. Register the adapter in packages/server/src/providers/factory.ts.
  6. Add tests in tests/unit/providers/.

The minimum implementation requires:

  • toProviderRequest() — Map unified format to the provider’s API format
  • fromProviderResponse() — Map the provider’s response back to unified format
  • execute() — Make the HTTP call and return the parsed response
  • executeStream() — Handle SSE streaming (use parseSSEStream() helper)
  • fromProviderStreamChunk() — Parse individual stream chunks
  • healthCheck() — Call a lightweight endpoint (e.g., list models) to verify connectivity
  • listModels() — Return available models
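Putting the pieces together, a new adapter might start from a skeleton like the one below. Everything here is illustrative: the real abstract signatures, capability values, and helper methods live in BaseProviderAdapter, and the types are simplified stand-ins.

```typescript
// Hypothetical starting point for a new adapter; the real class would
// extend BaseProviderAdapter and implement all required abstract methods.
interface Req {
  model: string;
  messages: { role: string; content: string }[];
}
interface Res {
  content: string;
  usage: { inputTokens: number; outputTokens: number };
}

class MyProviderAdapter {
  readonly providerType = "my-provider";           // unique slug
  readonly supportedCapabilities = ["chat", "streaming"];

  toProviderRequest(req: Req): object {
    // Map unified fields to the provider's wire format.
    return { model: req.model, messages: req.messages };
  }

  fromProviderResponse(raw: any): Res {
    // Map the provider's response back to the unified shape.
    return {
      content: raw.choices[0].message.content,
      usage: {
        inputTokens: raw.usage?.prompt_tokens ?? 0,
        outputTokens: raw.usage?.completion_tokens ?? 0,
      },
    };
  }

  async healthCheck(): Promise<boolean> {
    // A lightweight call (e.g., list models) would go here.
    return true;
  }
}
```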

Security

All adapters inherit these security measures from BaseProviderAdapter:

  • SSRF prevention — validateUrl() blocks requests to private IP ranges and internal hostnames (with an allowlist for admin-configured local providers)
  • API key sanitization — Error messages are scrubbed to remove API keys before logging or returning to clients
  • Request timeout — All HTTP requests have configurable timeouts with abort controllers
  • Retry logic — The gateway service wraps adapter calls with exponential backoff retry on transient errors (connection failures, 429, 5xx)
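Two of these measures are easy to illustrate. The real validateUrl() and sanitizeProviderError() implementations are more thorough (IPv6 ranges, allowlists, header scrubbing), so treat this as a sketch:

```typescript
// Sketch: redact a known API key from an error message before it is
// logged or returned to a client.
function scrubApiKey(message: string, apiKey: string): string {
  return apiKey ? message.split(apiKey).join("[REDACTED]") : message;
}

// Sketch: reject obvious private/internal hosts to reduce SSRF risk.
// IPv4-only; a real check also covers IPv6, DNS rebinding, and redirects.
function isPrivateHost(hostname: string): boolean {
  return (
    hostname === "localhost" ||
    /^127\./.test(hostname) ||                    // loopback
    /^10\./.test(hostname) ||                     // RFC 1918
    /^192\.168\./.test(hostname) ||               // RFC 1918
    /^172\.(1[6-9]|2\d|3[01])\./.test(hostname) || // RFC 1918 (172.16-31)
    /^169\.254\./.test(hostname)                  // link-local, incl. cloud metadata
  );
}
```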

Next Steps