---
title: Adding a New Provider
description: Learn how to add a new AI provider to DOAI Proxy by implementing the BaseProvider interface, registering it in the factory, and configuring environment variables.
---
This guide explains how to add a new AI provider to DOAI Proxy.
All providers must extend the `BaseProvider` class from `providers/base-provider.js`. Your provider class must implement the following methods:
```js
import axios from 'axios';
import { BaseProvider } from './base-provider.js';

export class YourProvider extends BaseProvider {
  // Unique identifier used by the provider factory.
  getType() {
    return 'yourprovider';
  }

  getName() {
    return 'yourprovider';
  }

  // Fail fast on startup if required configuration is missing.
  validateConfig() {
    const apiKey = process.env.YOUR_API_KEY;
    if (!apiKey) {
      throw new Error('YOUR_API_KEY is required');
    }
    return true;
  }

  // Convert an incoming OpenAI-style request into the provider's format.
  // Map only the fields the provider understands; unsupported params are dropped.
  transformRequest(openAIRequest) {
    const { messages, model, tools, ...otherParams } = openAIRequest;
    return {
      model,
      messages,
    };
  }

  // Convert the provider's response into the OpenAI chat completion format.
  transformResponse(providerResponse) {
    return {
      id: providerResponse.data.id,
      object: 'chat.completion',
      created: providerResponse.data.created,
      model: providerResponse.data.model,
      choices: providerResponse.data.choices,
      usage: providerResponse.data.usage,
    };
  }

  async makeRequest(request) {
    const response = await axios.post(
      `${this.config.apiUrl}/chat/completions`,
      request,
      {
        headers: {
          'Authorization': `Bearer ${this.config.apiKey}`,
          'Content-Type': 'application/json',
        },
        timeout: this.config.timeout,
      }
    );
    return response;
  }

  supportsStreaming() {
    return true;
  }

  supportsTools() {
    return true;
  }
}
```

Register your provider in `providers/provider-factory.js`:
```js
import { YourProvider } from './yourprovider-provider.js';

export class ProviderFactory {
  static create(providerType, env) {
    const type = providerType?.toLowerCase() || 'straico';
    switch (type) {
      case 'yourprovider':
        return new YourProvider(env);
      // ... other cases
      default:
        throw new Error(`Unknown provider type: ${providerType}`);
    }
  }
}
```

Add environment variables to `.env.example`:
```bash
# YourProvider Configuration
YOUR_API_KEY=your_api_key_here
YOUR_API_URL=https://api.yourprovider.com/v1
YOUR_API_TIMEOUT=60000
```

If your provider doesn't support native streaming, you can reuse the simulated streaming module:
```js
import { simulateStream } from '../streaming.js';

if (req.body.stream && !this.supportsStreaming()) {
  await simulateStream(aiResponse, res, {
    chunkSize: parseInt(process.env.STREAM_CHUNK_SIZE) || 15,
    delay: parseInt(process.env.STREAM_DELAY_MS) || 80,
  });
}
```

If your provider doesn't support native function calling, you can reuse the prompt injection module:
```js
import { injectToolsIntoSystem, parseToolCall, formatToolCallResponse } from '../tools.js';

const processedMessages = injectToolsIntoSystem(messages, tools);
```

Add to your `.env` file:
```bash
PROVIDER_TYPE=yourprovider
YOUR_API_KEY=your_actual_key
```

Then start the proxy:

```bash
npm start
```

Test with curl:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

See existing providers for reference:
- Straico Provider (`providers/straico-provider.js`) — simulated streaming (no native support), prompt injection for tools (no native support), OpenAI-compatible request/response format. A good example of a non-native provider.

Best practices:

- Keep provider-specific logic isolated — don't mix providers. Each provider should be self-contained.
- Reuse existing utilities — use `simulateStream()` for non-native streaming, `injectToolsIntoSystem()` for non-native tools, and the logging functions from `utils.js`.
- Handle errors gracefully — provide clear error messages, validate configuration on startup, and use try-catch for API calls.
- Support both streaming and non-streaming — let the client decide via the `stream: true/false` parameter, and implement `supportsStreaming()` correctly.
- Support both tools and non-tools — let the client decide via the `tools` parameter, and implement `supportsTools()` correctly.
- Log requests and responses — use `logRequest()` and `logResponse()` from `utils.js`, `logProviderResponse()` with the provider type, and `logError()` for errors.
- Follow OpenAI compatibility — the request format should match OpenAI's `/v1/chat/completions`, the response format should match OpenAI's chat completion format, and the SSE format should use the standard `data: {...}\n\n` pattern.
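To illustrate that last point, here is a minimal sketch of how a complete response can be split into OpenAI-style `chat.completion.chunk` SSE events. The helper name and defaults below are illustrative, not the actual `streaming.js` implementation:

```javascript
// Sketch: format text as OpenAI-style SSE chat.completion.chunk events.
// This illustrates the `data: {...}\n\n` pattern; it is not the real
// streaming.js module.
function toSSEChunks(text, { id = 'chatcmpl-demo', model = 'demo', chunkSize = 15 } = {}) {
  const events = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    const chunk = {
      id,
      object: 'chat.completion.chunk',
      model,
      choices: [{ index: 0, delta: { content: text.slice(i, i + chunkSize) }, finish_reason: null }],
    };
    events.push(`data: ${JSON.stringify(chunk)}\n\n`);
  }
  // Final chunk carries finish_reason, followed by the stream terminator.
  const done = {
    id,
    object: 'chat.completion.chunk',
    model,
    choices: [{ index: 0, delta: {}, finish_reason: 'stop' }],
  };
  events.push(`data: ${JSON.stringify(done)}\n\n`);
  events.push('data: [DONE]\n\n');
  return events;
}

const events = toSSEChunks('Hello from the proxy!', { chunkSize: 8 });
console.log(events.length); // → 5 (3 content chunks + stop chunk + [DONE])
```

Each event is a `data:` line terminated by a blank line, and the stream ends with the literal `data: [DONE]` sentinel that OpenAI clients expect.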
Before submitting your provider:
- All required methods are implemented
- `validateConfig()` throws clear errors for missing config
- `transformRequest()` produces valid provider requests
- `transformResponse()` produces OpenAI-compatible responses
- `makeRequest()` handles timeouts and errors
- `supportsStreaming()` returns the correct value
- `supportsTools()` returns the correct value
- Non-streaming requests work correctly
- Streaming requests work correctly (native or simulated)
- Tool requests work correctly (native or prompt injection)
- Error messages are clear and helpful
- Environment variables are documented in `.env.example`
- Provider is added to `ProviderFactory`
- README.md is updated with provider information
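For the `transformResponse()` item, a quick sanity check can catch shape mismatches before any real API wiring. This is a hedged sketch: the helper name and the sample payload below are made up for illustration, and the required fields follow OpenAI's chat completion format:

```javascript
// Sketch: verify a transformed response has the OpenAI chat completion shape.
// Helper and sample payload are illustrative, not part of DOAI Proxy.
function assertOpenAICompatible(response) {
  const required = ['id', 'object', 'created', 'model', 'choices', 'usage'];
  for (const field of required) {
    if (!(field in response)) {
      throw new Error(`Missing required field: ${field}`);
    }
  }
  if (response.object !== 'chat.completion') {
    throw new Error(`Expected object "chat.completion", got "${response.object}"`);
  }
  if (!Array.isArray(response.choices) || response.choices.length === 0) {
    throw new Error('choices must be a non-empty array');
  }
  return true;
}

// Example: a minimal OpenAI-compatible response passes the check.
const sample = {
  id: 'chatcmpl-123',
  object: 'chat.completion',
  created: 1700000000,
  model: 'your-model-name',
  choices: [{ index: 0, message: { role: 'assistant', content: 'Hi!' }, finish_reason: 'stop' }],
  usage: { prompt_tokens: 5, completion_tokens: 2, total_tokens: 7 },
};
console.log(assertOpenAICompatible(sample)); // → true
```

Running a check like this against the output of your `transformResponse()` (and against an error response) covers the compatibility items in the checklist above.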