diff --git a/api-reference/authentication.mdx b/api-reference/authentication.mdx
index 632f8f9..75c6d47 100644
--- a/api-reference/authentication.mdx
+++ b/api-reference/authentication.mdx
@@ -1,40 +1,35 @@
---
title: "Authentication"
description: "How to authenticate to the Edgee API"
+icon: key-round
---
-The Edgee API uses API tokens to authenticate requests. You can view and manage your API token in the
-Edgee [Console](https://www.edgee.cloud/settings/tokens).
+The Edgee API uses API keys to authenticate requests. You can view and manage your API keys in the
+Edgee [Console](https://www.edgee.cloud/). See the [Create an API Key](/quickstart/api-key) guide to learn how to create one.
-Your API tokens carry many privileges, so be sure to keep them secure! Do not share your tokens in
-publicly accessible areas such as GitHub, client-side code, and so forth.
+<Warning>
+ Your API keys carry many privileges, so be sure to keep them secure!
+
+ Do not share your API keys in publicly accessible areas such as GitHub, client-side code, and so forth.
+</Warning>
Authentication to the API is performed via Bearer authentication (also called token authentication).
It is an HTTP authentication scheme that involves security tokens called bearer tokens. The client must send this token
in the Authorization header when making requests to protected resources:
```bash
-Authorization: Bearer {{token}}
+Authorization: Bearer <YOUR_API_KEY>
```
-If you need to authenticate via HTTP Basic Auth,
-use `-u {{token}}:` instead of `-H "Authorization: Bearer {{token}}"`.
-
All API requests must be made over HTTPS. Calls made over plain HTTP will fail. API requests without authentication
will also fail.
```bash cURL with Bearer
- curl 'https://api.edgee.ai/v1/projects' \
- -H 'Authorization: Bearer {{token}}'
+ curl 'https://api.edgee.ai/v1/models' \
+ -H 'Authorization: Bearer <YOUR_API_KEY>'
```
-
- ```bash cURL with Basic Auth
- curl 'https://api.edgee.ai/v1/projects' \
- -u '{{token}}:'
- # The colon prevents curl from asking for a password.
- ```
-
+
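+Because the API is OpenAI-compatible, you can also authenticate through the official OpenAI SDK by pointing it at Edgee's base URL. A minimal sketch (the model ID is just an example):
+
+```typescript OpenAI SDK
+import OpenAI from 'openai';
+
+// The SDK sends the `Authorization: Bearer <YOUR_API_KEY>` header for you.
+const client = new OpenAI({
+  apiKey: process.env.EDGEE_API_KEY,
+  baseURL: 'https://api.edgee.ai/v1',
+});
+
+const completion = await client.chat.completions.create({
+  model: 'openai/gpt-4o',
+  messages: [{ role: 'user', content: 'Hello!' }],
+});
+
+console.log(completion.choices[0].message.content);
+```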
diff --git a/api-reference/caching/purge-cache.mdx b/api-reference/caching/purge-cache.mdx
deleted file mode 100644
index 34852c3..0000000
--- a/api-reference/caching/purge-cache.mdx
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: 'Purge Cache'
-openapi: 'POST /v1/projects/{id}/purge-cache'
----
-
-When you purge cache, Edgee will remove the specified cached content from all edge locations worldwide, which is useful when you need to update cached content, clear stale data after deployments, force fresh content to be served to users, or troubleshoot caching issues.
-
-Query strings are automatically removed from the path before purging and cache is purged for all domains associated with the project across all edge locations worldwide.
diff --git a/api-reference/chat-completion.mdx b/api-reference/chat-completion.mdx
new file mode 100644
index 0000000..9e66622
--- /dev/null
+++ b/api-reference/chat-completion.mdx
@@ -0,0 +1,7 @@
+---
+title: 'Chat Completion'
+description: 'Create chat completions using the Edgee AI Gateway API'
+openapi: 'POST /v1/chat/completions'
+---
+
+Creates a completion for the provided chat messages. The Edgee API is OpenAI-compatible and works with any model and provider. It supports both streaming and non-streaming responses.
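+
+For a quick illustration, here is a minimal non-streaming request (a sketch using `fetch`; any HTTP client works):
+
+```typescript Minimal request
+const response = await fetch('https://api.edgee.ai/v1/chat/completions', {
+  method: 'POST',
+  headers: {
+    'Authorization': `Bearer ${process.env.EDGEE_API_KEY}`,
+    'Content-Type': 'application/json',
+  },
+  body: JSON.stringify({
+    model: 'openai/gpt-4o',
+    messages: [{ role: 'user', content: 'Hello!' }],
+  }),
+});
+
+const completion = await response.json();
+console.log(completion.choices[0].message.content);
+```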
diff --git a/api-reference/errors.mdx b/api-reference/errors.mdx
index b424790..abc6195 100644
--- a/api-reference/errors.mdx
+++ b/api-reference/errors.mdx
@@ -1,64 +1,222 @@
---
title: "Errors"
description: "How Edgee API responds when errors occur."
+icon: circle-x
---
-Edgee uses conventional HTTP response codes to indicate the success or failure of an API request.
-In general: Codes in the 2xx range indicate success.
-Codes in the 4xx range indicate an error that failed given the information provided (e.g., a required parameter
-was omitted, a charge failed, etc.).
-Codes in the 5xx range indicate an error with Edgee's servers.
+Edgee uses conventional HTTP response codes to indicate the success or failure of an API request. In general: codes in the 2xx range indicate success; codes in the 4xx range indicate an error caused by the information provided (e.g., a required parameter was omitted, authentication failed, etc.); codes in the 5xx range indicate an error with Edgee's servers.
-Some 4xx errors that could be handled programmatically include
-an error code that briefly explains the error reported.
+When an error occurs, the API returns a JSON object with an `error` field containing details about what went wrong.
-### Parameters
+## Error Response Format
-
- The type of error returned.
+<ResponseField name="error" type="object">
+ The error object.
- One of invalid_request_error, not_found_error, creation_error,
- update_error, deletion_error, forbidden_error, or authentication_error.
+
+ <ResponseField name="error.code" type="string" required>
+ A machine-readable error code that briefly explains the error reported.
+ </ResponseField>
+
+ <ResponseField name="error.message" type="string" required>
+ A human-readable message providing more details about the error.
+ </ResponseField>
-
- A human-readable message providing more details about the error.
+</ResponseField>
+```json Error Response Example
+{
+ "error": {
+ "code": "bad_model_id",
+ "message": "Invalid model ID: 'invalid-model'"
+ }
+}
+```
+
+
+## HTTP Status Code Summary
+
+Below is a summary of the HTTP status codes that the Edgee API uses.
+
+| HTTP Code | Status | Description |
+| --------- | ------ | ----------- |
+| 200 | OK | Everything worked as expected. |
+| 400 | Bad Request | The request was unacceptable, often due to missing a required parameter, invalid model ID, model not found, or provider not supported. |
+| 401 | Unauthorized | No valid API key provided, or the Authorization header is missing or malformed. |
+| 403 | Forbidden | The API key doesn't have permissions to perform the request. This can occur if the key is inactive, expired, or the requested model is not allowed for this key. |
+| 404 | Not Found | The requested resource doesn't exist. |
+| 429 | Too Many Requests | Too many requests hit the API too quickly, or a usage limit was exceeded. We recommend an exponential backoff of your requests. |
+| 500, 502, 503, 504 | Server Errors | Something went wrong on Edgee's end. (These are rare.) |
+
+## Error Codes
+
+### 400 Bad Request
+
+<ResponseField name="error.code" type="string">
+ One of the following error codes:
+
+ - `bad_model_id`: The model ID format is invalid
+ - `model_not_found`: The requested model does not exist or is not available
+ - `provider_not_supported`: The requested provider is not supported for the specified model
-
- If the error is parameter-specific, this will contain a list of the parameters that were invalid.
+</ResponseField>
+```json Bad Model ID
+{
+ "error": {
+ "code": "bad_model_id",
+ "message": "Invalid model ID: 'invalid-model'"
+ }
+}
+```
+
+```json Model Not Found
+{
+ "error": {
+ "code": "model_not_found",
+ "message": "Model 'openai/gpt-1' not found"
+ }
+}
+```
+
+```json Provider Not Supported
+{
+ "error": {
+ "code": "provider_not_supported",
+ "message": "Provider 'anthropic' is not supported for model 'openai/gpt-4o'"
+ }
+}
+```
+
+
+### 401 Unauthorized
+
+<ResponseField name="error.code" type="string">
+ Always `"unauthorized"`.
-## HTTP Status Code Summary
+</ResponseField>
+```json Missing Authorization Header
+{
+ "error": {
+ "code": "unauthorized",
+ "message": "Missing Authorization header"
+ }
+}
+```
-Bellow is a summary of the HTTP status codes that Edgee API uses.
-
-| HTTP Code | Status | Description |
-| ------------------ | ------ | ----------- |
-| 200 | OK | Everything worked as expected. |
-| 400 | Bad Request | The request was unacceptable, often due to missing a required parameter. |
-| 401 | Unauthorized | No valid API key provided. |
-| 402 | Request Failed | The parameters were valid but the request failed. |
-| 403 | Forbidden | The API key doesn't have permissions to perform the request. |
-| 404 | Not Found | The requested resource doesn't exist. |
-| 409 | Conflict | The request conflicts with another request (perhaps due to using the same idempotent key). |
-| 429 | Too Many Requests | Too many requests hit the API too quickly. We recommend an exponential backoff of your requests. |
-| 500, 502, 503, 504 | Server Errors | Something went wrong on Edgee's end. (These are rare.) |
+```json Invalid Authorization Format
+{
+ "error": {
+ "code": "unauthorized",
+ "message": "Invalid Authorization header format"
+ }
+}
+```
+
+```json Failed to Retrieve API Key
+{
+ "error": {
+ "code": "unauthorized",
+ "message": "Failed to retrieve API key: "
+ }
+}
+```
+
+
+### 403 Forbidden
+
+<ResponseField name="error.code" type="string">
+ Always `"forbidden"`.
+</ResponseField>
+
+
+```json Inactive Key
+{
+ "error": {
+ "code": "forbidden",
+ "message": "API key is inactive"
+ }
+}
+```
+
+```json Expired Key
+{
+ "error": {
+ "code": "forbidden",
+ "message": "API key has expired"
+ }
+}
+```
+
+```json Model Not Allowed
+{
+ "error": {
+ "code": "forbidden",
+ "message": "Model 'openai/gpt-4o' is not allowed for this API key"
+ }
+}
+```
+
+
+### 429 Too Many Requests
+
+<ResponseField name="error.code" type="string">
+ Always `"usage_limit_exceeded"`.
+</ResponseField>
+
+
+```json Usage Limit Exceeded
+{
+ "error": {
+ "code": "usage_limit_exceeded",
+ "message": "Usage limit exceeded: 1000.00 / 1000 tokens used"
+ }
+}
+```
+
+```json No Credits Remaining
+{
+ "error": {
+ "code": "usage_limit_exceeded",
+ "message": "Organization has no credits remaining"
+ }
+}
+```
+
+
+### 500 Internal Server Error
+
+When a server error occurs, the API may return a generic error response. These errors are rare and typically indicate an issue on Edgee's side.
-```json Response Example
+```json Server Error
{
"error": {
- "type": "invalid_request_error",
- "params": [
- {
- "param": "name",
- "message": "This field is required"
- }
- ],
- "message": "Parameter error"
+ "code": "internal_error",
+ "message": "An internal error occurred. Please try again later."
}
}
```
+
+## Handling Errors
+
+When you receive an error response:
+
+1. **Check the HTTP status code** to understand the general category of the error
+2. **Read the error code** (`error.code`) to understand the specific issue
+3. **Review the error message** (`error.message`) for additional context
+4. **Take appropriate action**:
+ - **400 errors**: Fix the request parameters and retry
+ - **401 errors**: Check your API key and authentication headers
+ - **403 errors**: Verify your API key permissions and status
+ - **429 errors**: Implement exponential backoff and retry logic
+ - **5xx errors**: Retry after a delay, or contact support if the issue persists
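+
+As a rough sketch of the steps above, a request helper can branch on the status and `error.code` (assuming the `fetch` API):
+
+```typescript Error handling
+const response = await fetch('https://api.edgee.ai/v1/chat/completions', {
+  method: 'POST',
+  headers: {
+    'Authorization': `Bearer ${process.env.EDGEE_API_KEY}`,
+    'Content-Type': 'application/json',
+  },
+  body: JSON.stringify({
+    model: 'openai/gpt-4o',
+    messages: [{ role: 'user', content: 'Hello!' }],
+  }),
+});
+
+if (!response.ok) {
+  const { error } = await response.json();
+  switch (error.code) {
+    case 'bad_model_id':
+    case 'model_not_found':
+    case 'provider_not_supported':
+      throw new Error(`Fix the request and retry: ${error.message}`);
+    case 'unauthorized':
+    case 'forbidden':
+      throw new Error(`Check your API key: ${error.message}`);
+    case 'usage_limit_exceeded':
+      // Retry with exponential backoff (see Rate Limiting below).
+      break;
+    default:
+      // 5xx or unknown code: retry after a delay, or contact support.
+      break;
+  }
+}
+```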
+
+## Rate Limiting
+
+If you exceed the rate limits, you will receive a `429 Too Many Requests` response. We recommend implementing exponential backoff when you encounter rate limit errors:
+
+1. Wait for the time specified in the `Retry-After` header (if present)
+2. Retry the request with exponential backoff
+3. Reduce the rate of requests to stay within limits
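+
+A minimal backoff sketch in TypeScript (assuming the `fetch` API; the helper name is illustrative):
+
+```typescript Exponential backoff
+async function fetchWithBackoff(
+  url: string,
+  init: RequestInit,
+  maxRetries = 5,
+): Promise<Response> {
+  for (let attempt = 0; attempt <= maxRetries; attempt++) {
+    const response = await fetch(url, init);
+    if (response.status !== 429) return response;
+
+    // Prefer the server-provided Retry-After delay; otherwise back off exponentially.
+    const retryAfter = Number(response.headers.get('Retry-After'));
+    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 500;
+    await new Promise((resolve) => setTimeout(resolve, delayMs));
+  }
+  throw new Error('Rate limited: retries exhausted');
+}
+```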
diff --git a/api-reference/index.mdx b/api-reference/index.mdx
index d8be12b..72d73f6 100644
--- a/api-reference/index.mdx
+++ b/api-reference/index.mdx
@@ -1,11 +1,12 @@
---
-title: 'Introduction'
-description: 'A brief introduction to the Edgee API.'
+title: 'Overview'
+description: 'A brief introduction to the Edgee AI Gateway API.'
+icon: book
---
-Welcome to the Edgee API documentation. This guide will help you understand how to interact with the Edgee API to create,
-retrieve, update, and delete Edgee resources through HTTP requests.
+Welcome to the Edgee AI Gateway API documentation. This guide will help you understand how to interact with the Edgee API to create chat completions and manage models through HTTP requests.
+
+Edgee is an edge-native AI Gateway with private model hosting, automatic model selection, cost audits/alerts, and edge tools. The API is **OpenAI-compatible**, providing one API for any model and any provider.
## Base URL
@@ -17,20 +18,46 @@ https://api.edgee.ai
## Authentication
-The Edgee API uses bearer authentication. When making requests, you must include your API token in the `Authorization`
-header in the format `Bearer {{token}}`. For more details, please refer to the [Authentication](./authentication) page.
+The Edgee API uses bearer authentication. When making requests, you must include your API key in the `Authorization` header in the format `Bearer <YOUR_API_KEY>`. For more details, please refer to the [Authentication](./authentication) page.
## Errors
-When an error occurs, the Edgee API responds with a conventional HTTP response code and a JSON object containing more
-details about the error. For more information, please refer to the [Errors](./errors) page.
+When an error occurs, the Edgee API responds with a conventional HTTP response code and a JSON object containing more details about the error. For more information, please refer to the [Errors](./errors) page.
## Rate Limiting
-Please note that the Edgee API has rate limits to prevent abuse and ensure service stability. If you exceed these limits,
-your requests will be throttled and you will receive a `429 Too Many Requests` response.
+Please note that Edgee has its own rate-limiting technology to prevent abuse and ensure service stability.
+If you exceed these limits, your requests will be throttled and you will receive a `429 Too Many Requests` response.
+Additionally, usage limits may be enforced based on your API key configuration.
+
+## Features
+
+<CardGroup cols={2}>
+ <Card>
+ **OpenAI-Compatible API**
+
+ Fully compatible with the OpenAI API format, making it easy to switch between providers or use multiple providers through a single interface.
+ </Card>
+ <Card>
+ **Multi-Provider Support**
+
+ Access models from multiple providers (OpenAI, Anthropic, etc.) through a single API endpoint. Simply specify the model using the format `{author_id}/{model_id}`.
+ </Card>
+ <Card>
+ **Streaming Support**
+
+ Both streaming and non-streaming responses are supported. Enable streaming by setting `stream: true` to receive Server-Sent Events (SSE) with partial message deltas.
+ </Card>
+ <Card>
+ **Function Calling**
+
+ The API supports function calling (tools), which lets models call external functions, enabling more interactive and powerful applications.
+ </Card>
+ <Card>
+ **Usage Tracking**
+
+ Every response includes detailed usage statistics: token counts (prompt, completion, total), cached tokens, and reasoning tokens.
+ </Card>
+</CardGroup>
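+
+For example, streaming just requires `stream: true` and reading the SSE stream. A minimal sketch using `fetch` (simplified: it assumes each read delivers complete `data:` lines, and uses Node's `process.stdout`):
+
+```typescript Streaming sketch
+const response = await fetch('https://api.edgee.ai/v1/chat/completions', {
+  method: 'POST',
+  headers: {
+    'Authorization': `Bearer ${process.env.EDGEE_API_KEY}`,
+    'Content-Type': 'application/json',
+  },
+  body: JSON.stringify({
+    model: 'openai/gpt-4o',
+    messages: [{ role: 'user', content: 'Tell me a joke.' }],
+    stream: true,
+  }),
+});
+
+// Each SSE event is a `data: {...}` line carrying a chat.completion.chunk object.
+const reader = response.body!.getReader();
+const decoder = new TextDecoder();
+while (true) {
+  const { done, value } = await reader.read();
+  if (done) break;
+  for (const line of decoder.decode(value).split('\n')) {
+    if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
+    const chunk = JSON.parse(line.slice('data: '.length));
+    process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
+  }
+}
+```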
----
-We hope this guide helps you get started with the Edgee API. If you have any questions, please don't hesitate to reach
-out to our support team.
diff --git a/api-reference/models.mdx b/api-reference/models.mdx
new file mode 100644
index 0000000..8c0dfb2
--- /dev/null
+++ b/api-reference/models.mdx
@@ -0,0 +1,7 @@
+---
+title: 'Models'
+description: 'List all available models in the Edgee AI Gateway'
+openapi: 'GET /v1/models'
+---
+
+Lists the currently available models, and provides basic information about each one such as the owner and availability. Returns only active models.
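+
+For instance, you can retrieve the list with a single authenticated request (a sketch using `fetch`):
+
+```typescript List models
+const response = await fetch('https://api.edgee.ai/v1/models', {
+  headers: { 'Authorization': `Bearer ${process.env.EDGEE_API_KEY}` },
+});
+
+// Each entry's `id` uses the `{author_id}/{model_id}` format, e.g. "openai/gpt-4o".
+const { data } = await response.json();
+for (const model of data) {
+  console.log(model.id, model.owned_by);
+}
+```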
diff --git a/api-reference/openapi.json b/api-reference/openapi.json
index ca4d9a3..8ed12bf 100644
--- a/api-reference/openapi.json
+++ b/api-reference/openapi.json
@@ -2,11 +2,17 @@
"openapi": "3.0.1",
"info": {
"title": "Edgee API",
- "version": "1"
+ "version": "1.0.0",
+ "description": "Edgee is an edge-native AI Gateway with private model hosting, automatic model selection, cost audits/alerts, and edge tools. This API is OpenAI-compatible, providing one API for any model and any provider."
},
"servers": [
{
- "url": "https://api.edgee.ai"
+ "url": "https://api.edgee.ai",
+ "description": "Edgee AI Gateway"
+ },
+ {
+ "url": "http://localhost:7676",
+ "description": "Edgee AI Gateway (Local Development)"
}
],
"security": [
@@ -15,46 +21,75 @@
}
],
"paths": {
- "/v1/projects/{id}/purge-cache": {
+ "/v1/chat/completions": {
"post": {
- "operationId": "purgeProjectCache",
- "summary": "Purge cache for a project",
- "description": "Purge the cache for a specific project. You can purge all cache or purge cache for a specific path. When purging by path, the cache for all domains associated with the project will be purged for that path.",
- "parameters": [
- {
- "name": "id",
- "in": "path",
- "required": true,
- "schema": {
- "type": "string"
- },
- "description": "The project ID"
- }
- ],
+ "operationId": "createChatCompletion",
+ "summary": "Create chat completion",
+ "description": "Creates a completion for the chat message. Supports both streaming and non-streaming responses. The API is OpenAI-compatible and works with any model and provider.",
+ "tags": ["Chat"],
"requestBody": {
+ "required": true,
"content": {
"application/json": {
"schema": {
- "$ref": "#/components/schemas/PurgeCacheInput"
+ "$ref": "#/components/schemas/ChatCompletionRequest"
}
}
- },
- "required": true
+ }
},
"responses": {
"200": {
- "description": "Cache purged successfully",
+ "description": "Chat completion created successfully",
"content": {
"application/json": {
"schema": {
- "type": "object",
- "properties": {
- "message": {
- "type": "string",
- "example": "Cache purged"
+ "$ref": "#/components/schemas/ChatCompletionResponse"
+ },
+ "example": {
+ "id": "chatcmpl-123",
+ "object": "chat.completion",
+ "created": 1677652288,
+ "model": "openai/gpt-4o",
+ "choices": [
+ {
+ "index": 0,
+ "message": {
+ "role": "assistant",
+ "content": "Hello! How can I assist you today?"
+ },
+ "finish_reason": "stop"
+ }
+ ],
+ "usage": {
+ "prompt_tokens": 10,
+ "completion_tokens": 10,
+ "total_tokens": 20,
+ "input_tokens_details": {
+ "cached_tokens": 0
+ },
+ "output_tokens_details": {
+ "reasoning_tokens": 0
}
}
}
+ },
+ "text/event-stream": {
+ "schema": {
+ "type": "string",
+ "format": "binary",
+ "description": "Server-Sent Events stream. Each event is a JSON object prefixed with 'data: ' and followed by two newlines. The stream consists of multiple `ChatCompletionChunk` objects, and optionally a final chunk with usage statistics if `stream_options.include_usage` is true."
+ },
+ "examples": {
+ "contentChunk": {
+ "value": "data: {\"id\":\"chatcmpl-123\",\"object\":\"chat.completion.chunk\",\"created\":1677652288,\"model\":\"openai/gpt-4o\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"Hello\"},\"finish_reason\":null}]}\n\n"
+ },
+ "roleChunk": {
+ "value": "data: {\"id\":\"chatcmpl-123\",\"object\":\"chat.completion.chunk\",\"created\":1677652288,\"model\":\"openai/gpt-4o\",\"choices\":[{\"index\":0,\"delta\":{\"role\":\"assistant\"},\"finish_reason\":null}]}\n\n"
+ },
+ "finalChunk": {
+ "value": "data: {\"id\":\"chatcmpl-123\",\"object\":\"chat.completion.chunk\",\"created\":1677652288,\"model\":\"openai/gpt-4o\",\"choices\":[{\"index\":0,\"delta\":{},\"finish_reason\":\"stop\"}],\"usage\":{\"prompt_tokens\":10,\"completion_tokens\":10,\"total_tokens\":20,\"input_tokens_details\":{\"cached_tokens\":0},\"output_tokens_details\":{\"reasoning_tokens\":0}}}\n\n"
+ }
+ }
}
}
},
@@ -64,26 +99,158 @@
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
+ },
+ "examples": {
+ "badModelId": {
+ "value": {
+ "error": {
+ "code": "bad_model_id",
+ "message": "Invalid model ID: 'invalid-model'"
+ }
+ }
+ },
+ "modelNotFound": {
+ "value": {
+ "error": {
+ "code": "model_not_found",
+ "message": "Model 'openai/gpt-1' not found"
+ }
+ }
+ },
+ "providerNotSupported": {
+ "value": {
+ "error": {
+ "code": "provider_not_supported",
+ "message": "Provider 'anthropic' is not supported for model 'openai/gpt-4o'"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "401": {
+ "description": "Unauthorized - missing or invalid API key",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ErrorResponse"
+ },
+ "example": {
+ "error": {
+ "code": "unauthorized",
+ "message": "Missing Authorization header"
+ }
}
}
}
},
"403": {
- "description": "Forbidden - insufficient permissions",
+ "description": "Forbidden - API key is inactive, expired, or model not allowed",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
+ },
+ "examples": {
+ "inactiveKey": {
+ "value": {
+ "error": {
+ "code": "forbidden",
+ "message": "API key is inactive"
+ }
+ }
+ },
+ "expiredKey": {
+ "value": {
+ "error": {
+ "code": "forbidden",
+ "message": "API key has expired"
+ }
+ }
+ },
+ "modelNotAllowed": {
+ "value": {
+ "error": {
+ "code": "forbidden",
+ "message": "Model 'openai/gpt-4o' is not allowed for this API key"
+ }
+ }
+ }
}
}
}
},
- "404": {
- "description": "Project not found",
+ "429": {
+ "description": "Too many requests - usage limit exceeded",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
+ },
+ "example": {
+ "error": {
+ "code": "usage_limit_exceeded",
+ "message": "Usage limit exceeded: 1000.00 / 1000 tokens used"
+ }
+ }
+ }
+ }
+ },
+ "500": {
+ "description": "Internal server error",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ErrorResponse"
+ }
+ }
+ }
+ }
+ }
+ }
+ },
+ "/v1/models": {
+ "get": {
+ "operationId": "listModels",
+ "summary": "List models",
+ "description": "Lists the currently available models, and provides basic information about each one such as the owner and availability. Returns only active models.",
+ "tags": ["Models"],
+ "parameters": [
+ {
+ "name": "provider",
+ "in": "query",
+ "description": "Filter models by provider (optional, currently not implemented)",
+ "required": false,
+ "schema": {
+ "type": "string"
+ }
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "List of available models",
+ "content": {
+ "application/json": {
+ "schema": {
+ "$ref": "#/components/schemas/ModelsResponse"
+ },
+ "example": {
+ "object": "list",
+ "data": [
+ {
+ "id": "openai/gpt-4o",
+ "object": "model",
+ "created": 1677610602,
+ "owned_by": "openai"
+ },
+ {
+ "id": "anthropic/claude-3-opus",
+ "object": "model",
+ "created": 1677610602,
+ "owned_by": "anthropic"
+ }
+ ]
}
}
}
@@ -103,16 +270,453 @@
}
},
"components": {
- "parameters": {
- },
"schemas": {
+ "ChatCompletionRequest": {
+ "type": "object",
+ "required": ["model", "messages"],
+ "properties": {
+ "model": {
+ "type": "string",
+ "description": "ID of the model to use. Format: `{author_id}/{model_id}` (e.g. `openai/gpt-4o`)",
+ "example": "openai/gpt-4o"
+ },
+ "messages": {
+ "type": "array",
+ "description": "A list of messages comprising the conversation so far.",
+ "items": {
+ "$ref": "#/components/schemas/Message"
+ },
+ "minItems": 1
+ },
+ "max_tokens": {
+ "type": "integer",
+ "description": "The maximum number of tokens that can be generated in the chat completion.",
+ "minimum": 1
+ },
+ "stream": {
+ "type": "boolean",
+ "description": "If set, partial message deltas will be sent, as in OpenAI. Streamed chunks are sent as Server-Sent Events (SSE).",
+ "default": false
+ },
+ "stream_options": {
+ "type": "object",
+ "description": "Options for streaming response.",
+ "properties": {
+ "include_usage": {
+ "type": "boolean",
+ "description": "If set, an additional `[DONE]` message will be sent with usage statistics when the stream is finished."
+ }
+ }
+ },
+ "tools": {
+ "type": "array",
+ "description": "A list of tools the model may call. Currently, only `function` type is supported.",
+ "items": {
+ "$ref": "#/components/schemas/Tool"
+ }
+ },
+ "tool_choice": {
+ "oneOf": [
+ {
+ "type": "string",
+ "enum": ["none", "auto"],
+ "description": "Controls which (if any) tool is called by the model. `none` means the model will not call any tool. `auto` means the model can pick between generating a message or calling a tool."
+ },
+ {
+ "$ref": "#/components/schemas/ToolChoiceSpecific"
+ }
+ ],
+ "description": "Controls which tool is called by the model."
+ }
+ }
+ },
+ "Message": {
+ "type": "object",
+ "required": ["role"],
+ "properties": {
+ "role": {
+ "type": "string",
+ "enum": ["system", "user", "assistant", "tool", "developer"],
+ "description": "The role of the message author. Required properties vary by role:\n- `system`, `user`, `developer`: requires `content`\n- `assistant`: `content` is optional (can be empty if `tool_calls` is present)\n- `tool`: requires `content` and `tool_call_id`"
+ },
+ "content": {
+ "type": "string",
+ "description": "The contents of the message. Required for all roles except `assistant` (where it can be empty if `tool_calls` is present). For `assistant` role, defaults to empty string if not provided."
+ },
+ "name": {
+ "type": "string",
+ "description": "An optional name for the participant. Provides the model information to differentiate between participants of the same role. Used for `system`, `user`, `assistant`, and `developer` roles."
+ },
+ "tool_call_id": {
+ "type": "string",
+ "description": "The ID of the tool call that this message is responding to. Required for `tool` role only."
+ },
+ "refusal": {
+ "type": "string",
+ "description": "The refusal message from the model, if any. Used for `assistant` role only."
+ },
+ "tool_calls": {
+ "type": "array",
+ "description": "The tool calls made by the assistant. Used for `assistant` role only.",
+ "items": {
+ "$ref": "#/components/schemas/ToolCall"
+ }
+ }
+ }
+ },
+ "Tool": {
+ "type": "object",
+ "required": ["type", "function"],
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": ["function"],
+ "description": "The type of the tool. Currently, only `function` is supported."
+ },
+ "function": {
+ "$ref": "#/components/schemas/FunctionDefinition"
+ }
+ }
+ },
+ "FunctionDefinition": {
+ "type": "object",
+ "required": ["name"],
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."
+ },
+ "description": {
+ "type": "string",
+ "description": "A description of what the function does, used by the model to choose when and how to call the function."
+ },
+ "parameters": {
+ "type": "object",
+ "description": "The parameters the functions accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.",
+ "additionalProperties": true
+ }
+ }
+ },
+ "ToolChoiceSpecific": {
+ "type": "object",
+ "required": ["type", "function"],
+ "properties": {
+ "type": {
+ "type": "string",
+ "enum": ["function"],
+ "description": "The type of the tool."
+ },
+ "function": {
+ "$ref": "#/components/schemas/ToolChoiceFunction"
+ }
+ }
+ },
+ "ToolChoiceFunction": {
+ "type": "object",
+ "required": ["name"],
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "The name of the function to call."
+ }
+ }
+ },
+ "ToolCall": {
+ "type": "object",
+ "required": ["id", "type", "function"],
+ "properties": {
+ "id": {
+ "type": "string",
+ "description": "The ID of the tool call."
+ },
+ "type": {
+ "type": "string",
+ "enum": ["function"],
+ "description": "The type of the tool call."
+ },
+ "function": {
+ "$ref": "#/components/schemas/FunctionCall"
+ }
+ }
+ },
+ "FunctionCall": {
+ "type": "object",
+ "required": ["name", "arguments"],
+ "properties": {
+ "name": {
+ "type": "string",
+ "description": "The name of the function to call."
+ },
+ "arguments": {
+ "type": "string",
+ "description": "The arguments to call the function with, as JSON."
+ }
+ }
+ },
+ "ChatCompletionResponse": {
+ "type": "object",
+ "required": ["id", "object", "created", "model", "choices", "usage"],
+ "properties": {
+ "id": {
+ "type": "string",
+ "description": "A unique identifier for the chat completion.",
+ "example": "chatcmpl-123"
+ },
+ "object": {
+ "type": "string",
+ "enum": ["chat.completion"],
+ "description": "The object type, which is always `chat.completion`."
+ },
+ "created": {
+ "type": "integer",
+ "description": "The Unix timestamp (in seconds) of when the chat completion was created.",
+ "example": 1677652288
+ },
+ "model": {
+ "type": "string",
+ "description": "The model used for the chat completion.",
+ "example": "openai/gpt-4o"
+ },
+ "choices": {
+ "type": "array",
+ "description": "A list of chat completion choices. Can be more than one if n is greater than 1.",
+ "items": {
+ "$ref": "#/components/schemas/ChatCompletionChoice"
+ }
+ },
+ "usage": {
+ "$ref": "#/components/schemas/Usage"
+ }
+ }
+ },
+ "ChatCompletionChoice": {
+ "type": "object",
+ "required": ["index", "message"],
+ "properties": {
+ "index": {
+ "type": "integer",
+ "description": "The index of the choice in the list of choices.",
+ "minimum": 0
+ },
+ "message": {
+ "$ref": "#/components/schemas/AssistantMessage"
+ },
+ "finish_reason": {
+ "type": "string",
+ "enum": ["stop", "length", "content_filter", "tool_calls"],
+ "description": "The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `tool_calls` if the model called a tool."
+ }
+ }
+ },
+ "Usage": {
+ "type": "object",
+ "description": "Usage statistics for the completion. In streaming responses, this is only present in the final chunk when `stream_options.include_usage` is true.",
+ "required": ["prompt_tokens", "completion_tokens", "total_tokens", "input_tokens_details", "output_tokens_details"],
+ "properties": {
+ "prompt_tokens": {
+ "type": "integer",
+ "description": "Number of tokens in the prompt.",
+ "minimum": 0
+ },
+ "completion_tokens": {
+ "type": "integer",
+ "description": "Number of tokens in the generated completion.",
+ "minimum": 0
+ },
+ "total_tokens": {
+ "type": "integer",
+ "description": "Total number of tokens used in the request (prompt + completion).",
+ "minimum": 0
+ },
+ "input_tokens_details": {
+ "$ref": "#/components/schemas/InputTokenDetails"
+ },
+ "output_tokens_details": {
+ "$ref": "#/components/schemas/OutputTokenDetails"
+ }
+ }
+ },
+ "InputTokenDetails": {
+ "type": "object",
+ "description": "Additional details about input tokens.",
+ "properties": {
+ "cached_tokens": {
+ "type": "integer",
+ "description": "Number of cached tokens in the input.",
+ "minimum": 0
+ }
+ }
+ },
+ "OutputTokenDetails": {
+ "type": "object",
+ "description": "Additional details about output tokens.",
+ "properties": {
+ "reasoning_tokens": {
+ "type": "integer",
+ "description": "Number of reasoning tokens in the output.",
+ "minimum": 0
+ }
+ }
+ },
+ "ModelsResponse": {
+ "type": "object",
+ "required": ["object", "data"],
+ "properties": {
+ "object": {
+ "type": "string",
+ "enum": ["list"],
+ "description": "The object type, which is always `list`."
+ },
+ "data": {
+ "type": "array",
+ "description": "The list of models.",
+ "items": {
+ "$ref": "#/components/schemas/Model"
+ }
+ }
+ }
+ },
+ "Model": {
+ "type": "object",
+ "required": ["id", "object", "created", "owned_by"],
+ "properties": {
+ "id": {
+ "type": "string",
+ "description": "The model identifier, which can be referenced in the API. Format: `{author_id}/{model_id}`.",
+ "example": "openai/gpt-4o"
+ },
+ "object": {
+ "type": "string",
+ "enum": ["model"],
+ "description": "The object type, which is always `model`."
+ },
+ "created": {
+ "type": "integer",
+ "description": "The Unix timestamp (in seconds) when the model was created.",
+ "example": 1677610602
+ },
+ "owned_by": {
+ "type": "string",
+ "description": "The organization that owns the model.",
+ "example": "openai"
+ }
+ }
+ },
+ "ChatCompletionChunk": {
+ "type": "object",
+ "required": ["id", "object", "created", "model", "choices"],
+ "description": "A streaming chunk in the chat completion response. Used when `stream: true` in the request.",
+ "properties": {
+ "id": {
+ "type": "string",
+ "description": "A unique identifier for the chat completion chunk.",
+ "example": "chatcmpl-123"
+ },
+ "object": {
+ "type": "string",
+ "enum": ["chat.completion.chunk"],
+ "description": "The object type, which is always `chat.completion.chunk` for streaming responses."
+ },
+ "created": {
+ "type": "integer",
+ "description": "The Unix timestamp (in seconds) of when the chat completion was created.",
+ "example": 1677652288
+ },
+ "model": {
+ "type": "string",
+ "description": "The model used for the chat completion.",
+ "example": "openai/gpt-4o"
+ },
+ "choices": {
+ "type": "array",
+ "description": "A list of chat completion choices for this chunk.",
+ "items": {
+ "$ref": "#/components/schemas/ChatCompletionChunkChoice"
+ }
+ },
+ "usage": {
+ "$ref": "#/components/schemas/Usage"
+ }
+ }
+ },
+ "ChatCompletionChunkChoice": {
+ "type": "object",
+ "required": ["index", "delta"],
+ "description": "A choice in a streaming chat completion chunk.",
+ "properties": {
+ "index": {
+ "type": "integer",
+ "description": "The index of the choice in the list of choices.",
+ "minimum": 0
+ },
+ "delta": {
+ "$ref": "#/components/schemas/Delta",
+ "description": "A delta representing the change in the message content. The first chunk typically contains `role`, subsequent chunks contain `content`."
+ },
+ "finish_reason": {
+ "type": "string",
+ "enum": ["stop", "length", "content_filter", "tool_calls"],
+ "description": "The reason the model stopped generating tokens. This will be `null` for all chunks except the final one. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `tool_calls` if the model called a tool."
+ }
+ }
+ },
+ "Delta": {
+ "type": "object",
+ "description": "Represents a change in message content. The first chunk typically contains `role`, subsequent chunks contain `content`.",
+ "properties": {
+ "role": {
+ "type": "string",
+ "description": "The role of the message author. Typically present only in the first chunk.",
+ "example": "assistant"
+ },
+ "content": {
+ "type": "string",
+ "description": "The content of the message delta. Present in content chunks.",
+ "example": "Hello"
+ }
+ }
+ },
+ "ErrorResponse": {
+ "type": "object",
+ "required": ["error"],
+ "description": "Error response.",
+ "$ref": "#/components/schemas/ErrorResponse",
+ "properties": {
+ "error": {
+ "type": "object",
+ "required": ["code", "message"],
+ "properties": {
+ "code": {
+ "type": "string",
+ "description": "A machine-readable error code.",
+ "examples": ["bad_model_id", "model_not_found", "provider_not_supported", "unauthorized", "forbidden", "usage_limit_exceeded"]
+ },
+ "message": {
+ "type": "string",
+ "description": "A human-readable error message."
+ }
+ }
+ }
+ }
+ }
},
"securitySchemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer",
- "description": "Bearer authentication header of the form `Bearer `, where `` is your auth token. More info [here]('/docs/api-reference/authentication')"
+ "bearerFormat": "JWT",
+ "description": "Bearer authentication header of the form `Bearer `, where `` is your API key. More info [here](/docs/api-reference/authentication)"
}
}
- }
+ },
+ "tags": [
+ {
+ "name": "Chat",
+ "description": "Chat completion endpoints"
+ },
+ {
+ "name": "Models",
+ "description": "Model management endpoints"
+ }
+ ]
}
diff --git a/docs.json b/docs.json
index 3d5bc52..479bd0f 100644
--- a/docs.json
+++ b/docs.json
@@ -83,10 +83,50 @@
"group": "SDK Documentation",
"pages": [
"sdk/index",
- "sdk/typescript/index",
- "sdk/python/index",
- "sdk/rust/index",
- "sdk/go/index",
+ {
+ "group": "TypeScript",
+ "icon": "https://d3gk2c5xim1je2.cloudfront.net/devicon/typescript.svg",
+ "pages": [
+ "sdk/typescript/index",
+ "sdk/typescript/configuration",
+ "sdk/typescript/send",
+ "sdk/typescript/stream",
+ "sdk/typescript/tools"
+ ]
+ },
+ {
+ "group": "Python",
+ "icon": "python",
+ "pages": [
+ "sdk/python/index",
+ "sdk/python/configuration",
+ "sdk/python/send",
+ "sdk/python/stream",
+ "sdk/python/tools"
+ ]
+ },
+ {
+ "group": "Rust",
+ "icon": "rust",
+ "pages": [
+ "sdk/rust/index",
+ "sdk/rust/configuration",
+ "sdk/rust/send",
+ "sdk/rust/stream",
+ "sdk/rust/tools"
+ ]
+ },
+ {
+ "group": "Go",
+ "icon": "golang",
+ "pages": [
+ "sdk/go/index",
+ "sdk/go/configuration",
+ "sdk/go/send",
+ "sdk/go/stream",
+ "sdk/go/tools"
+ ]
+ },
"sdk/openai/index"
]
}
@@ -95,20 +135,21 @@
{
"tab": "API Reference",
"icon": "terminal",
- "hidden": true,
"groups": [
{
- "group": "API Documentation",
+ "group": "Introduction",
"pages": [
- "api-reference/index",
- "api-reference/authentication",
- "api-reference/errors",
- {
- "group": "Caching",
- "pages": [
- "api-reference/caching/purge-cache"
- ]
- }
+ "api-reference/index",
+ "api-reference/authentication",
+ "api-reference/errors"
+ ]
+ },
+ {
+ "group": "Endpoints",
+ "pages": [
+ "api-reference/chat-completion",
+ "api-reference/models"
]
}
]
diff --git a/images/byok-dark.png b/images/byok-dark.png
new file mode 100644
index 0000000..6ffc6ab
Binary files /dev/null and b/images/byok-dark.png differ
diff --git a/images/byok-light.png b/images/byok-light.png
new file mode 100644
index 0000000..6019216
Binary files /dev/null and b/images/byok-light.png differ
diff --git a/introduction.mdx b/introduction.mdx
index d98a790..c8615e2 100644
--- a/introduction.mdx
+++ b/introduction.mdx
@@ -11,64 +11,75 @@ Edgee is a **unified AI Gateway** that sits between your application and LLM pro
## Get Started in 6 Lines
-
-
-```typescript
-import Edgee from 'edgee';
-const edgee = new Edgee();
-
-const response = await edgee.send({
- model: 'gpt-5.1',
- input: 'Hello, world!'
-});
-
-console.log(response.output_text);
-```
-
-```python
-from edgee import Edgee
-client = Edgee()
-
-response = client.send(
- model="gpt-5.1",
- input="Write a short bedtime story about a unicorn."
-)
-
-print(response.output_text)
-```
-
-```go
-import "github.com/edgee-cloud/go-sdk"
-edgee := edgee.NewEdgee()
-
-response := edgee.Send(&edgee.SendRequest{
- Model: "gpt-5.1",
- Input: "Hello, world!",
-})
-
-println(response.OutputText)
-```
-
-```rust
-use edgee::Edgee;
-
-let edgee = Edgee::new();
-
-let response = edgee.send(edgee::SendRequest {
- model: "gpt-5.1".to_string(),
- input: "Hello, world!".to_string(),
-})?;
-
-println!("{}", response.output_text);
-```
-
-```bash
-curl -X POST https://api.edgee.ai/v1/send \
--H "Authorization: Bearer $EDGEE_API_KEY" \
--H "Content-Type: application/json" \
--d '{"model": "gpt-5.1", "input": "Hello, world!"}'
-```
-
+<Tabs>
+  <Tab title="TypeScript">
+ ```typescript
+ import Edgee from 'edgee';
+
+ const edgee = new Edgee("your-api-key");
+
+ const response = await edgee.send({
+ model: 'gpt-4o',
+ input: 'What is the capital of France?',
+ });
+
+ console.log(response.text);
+ // "The capital of France is Paris."
+ ```
+  </Tab>
+
+  <Tab title="Python">
+ ```python
+ from edgee import Edgee
+
+ edgee = Edgee("your-api-key")
+
+ response = edgee.send(
+ model="gpt-4o",
+ input="What is the capital of France?"
+ )
+
+ print(response.text)
+ # "The capital of France is Paris."
+ ```
+  </Tab>
+
+  <Tab title="Go">
+ ```go
+ package main
+
+ import (
+ "fmt"
+ "log"
+ "github.com/edgee-cloud/go-sdk/edgee"
+ )
+
+ func main() {
+  client, err := edgee.NewClient("your-api-key")
+  if err != nil {
+    log.Fatal(err)
+  }
+
+ response, err := client.Send("gpt-4o", "What is the capital of France?")
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Println(response.Text())
+ // "The capital of France is Paris."
+ }
+ ```
+  </Tab>
+
+  <Tab title="Rust">
+ ```rust
+ use edgee::Edgee;
+
+ let client = Edgee::with_api_key("your-api-key");
+ let response = client.send("gpt-4o", "What is the capital of France?").await.unwrap();
+
+ println!("{}", response.text().unwrap_or(""));
+ // "The capital of France is Paris."
+ ```
+  </Tab>
+</Tabs>
That's it. You now have access to every major LLM provider, automatic failovers, cost tracking, and full observability, all through one simple API.
diff --git a/introduction/why-edgee.mdx b/introduction/why-edgee.mdx
index 2d285c6..0f7f580 100644
--- a/introduction/why-edgee.mdx
+++ b/introduction/why-edgee.mdx
@@ -64,8 +64,10 @@ Powered by **Fastly** and **AWS**, our network spans six continents:
With a single Edgee API key, you get instant access to every supported model; OpenAI, Anthropic, Google, Mistral, and more. No need to manage multiple provider accounts or juggle API keys:
+
+<CodeGroup>
```typescript
-const edgee = new Edgee(process.env.EDGEE_API_KEY);
+const edgee = new Edgee();
// Access any model with the same key
await edgee.send({ model: 'gpt-4o', input: 'Hello, world!' });
@@ -73,21 +75,57 @@ await edgee.send({ model: 'claude-sonnet-4.5', input: 'Hello, world!' });
await edgee.send({ model: 'gemini-3-pro', input: 'Hello, world!' });
```
-### Bring Your Own Keys
+```python
+from edgee import Edgee
-Need more control? Use your existing provider API keys alongside Edgee. This gives you direct billing relationships, access to custom fine-tuned models, and the ability to use provider-specific features:
+edgee = Edgee()
-```typescript
-const edgee = new Edgee({
- apiKey: process.env.EDGEE_API_KEY,
- providers: {
- openai: { apiKey: process.env.OPENAI_API_KEY },
- anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
- },
-});
-
-// Requests to OpenAI/Anthropic use YOUR keys
-// Other providers fall back to Edgee's unified access
+# Access any model with the same key
+edgee.send(model='gpt-4o', input='Hello, world!')
+edgee.send(model='claude-sonnet-4.5', input='Hello, world!')
+edgee.send(model='gemini-3-pro', input='Hello, world!')
+```
+
+```rust
+use edgee::Edgee;
+
+let client = Edgee::from_env()?;
+
+// Access any model with the same key
+client.send("gpt-4o", "Hello, world!").await?;
+client.send("claude-sonnet-4.5", "Hello, world!").await?;
+client.send("gemini-3-pro", "Hello, world!").await?;
+```
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+client, _ := edgee.NewClient(nil)
+
+// Access any model with the same key
+client.Send("gpt-4o", "Hello, world!")
+client.Send("claude-sonnet-4.5", "Hello, world!")
+client.Send("gemini-3-pro", "Hello, world!")
```
+
+</CodeGroup>
+
+### Bring Your Own Keys
+
+Need more control? Use your existing provider API keys alongside Edgee. This gives you direct billing relationships, access to custom fine-tuned models, and the ability to use provider-specific features.
+
You can mix both approaches—use Edgee's unified access for some providers and your own keys for others.
+
+<Frame>
+  <img className="block dark:hidden" src="/images/byok-light.png" alt="Bring Your Own Keys" />
+  <img className="hidden dark:block" src="/images/byok-dark.png" alt="Bring Your Own Keys" />
+</Frame>
\ No newline at end of file
diff --git a/package-lock.json b/package-lock.json
index d39a1e1..fb6cdf4 100644
--- a/package-lock.json
+++ b/package-lock.json
@@ -5,7 +5,7 @@
"packages": {
"": {
"dependencies": {
- "mintlify": "^4.2.264"
+ "mintlify": "^4.2.272"
}
},
"node_modules/@alcalzone/ansi-tokenize": {
@@ -85,12 +85,12 @@
}
},
"node_modules/@babel/code-frame": {
- "version": "7.27.1",
- "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.27.1.tgz",
- "integrity": "sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg==",
+ "version": "7.28.6",
+ "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.28.6.tgz",
+ "integrity": "sha512-JYgintcMjRiCvS8mMECzaEn+m3PfoQiyqukOMCCVQtoJGYJw8j/8LBJEiqkHLkfwCcs74E3pbAUFNg7d9VNJ+Q==",
"license": "MIT",
"dependencies": {
- "@babel/helper-validator-identifier": "^7.27.1",
+ "@babel/helper-validator-identifier": "^7.28.5",
"js-tokens": "^4.0.0",
"picocolors": "^1.1.1"
},
@@ -135,7 +135,6 @@
"resolved": "https://registry.npmjs.org/@floating-ui/core/-/core-1.7.3.tgz",
"integrity": "sha512-sGnvb5dmrJaKEZ+LDIpguvdX3bDlEllmv4/ClQ9awcmCZrlx5jQyyMWFM5kBI+EyNOCDDiKk8il0zeuX3Zlg/w==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@floating-ui/utils": "^0.2.10"
}
@@ -145,7 +144,6 @@
"resolved": "https://registry.npmjs.org/@floating-ui/dom/-/dom-1.7.4.tgz",
"integrity": "sha512-OOchDgh4F2CchOX94cRVqhvy7b3AFb+/rQXyswmzmGakRfkMgoWVjfnLWkRirfLEfuD4ysVW16eXzwt3jHIzKA==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@floating-ui/core": "^1.7.3",
"@floating-ui/utils": "^0.2.10"
@@ -156,7 +154,6 @@
"resolved": "https://registry.npmjs.org/@floating-ui/react-dom/-/react-dom-2.1.6.tgz",
"integrity": "sha512-4JX6rEatQEvlmgU80wZyq9RT96HZJa88q8hp0pBd+LrczeDI4o6uA2M+uvxngVHo4Ihr8uibXxH6+70zhAFrVw==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@floating-ui/dom": "^1.7.4"
},
@@ -169,8 +166,7 @@
"version": "0.2.10",
"resolved": "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.10.tgz",
"integrity": "sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==",
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/@img/sharp-darwin-arm64": {
"version": "0.33.5",
@@ -999,18 +995,18 @@
}
},
"node_modules/@mintlify/cli": {
- "version": "4.0.868",
- "resolved": "https://registry.npmjs.org/@mintlify/cli/-/cli-4.0.868.tgz",
- "integrity": "sha512-sFe5A06EpRx/2hqBDbJwbVJyK6q8IreZCNupTBVskPPK0jGfV2bRAf5E8BrArxhZT7a5YrOeyKcqdB10n1Lzig==",
+ "version": "4.0.876",
+ "resolved": "https://registry.npmjs.org/@mintlify/cli/-/cli-4.0.876.tgz",
+ "integrity": "sha512-XfUn6HC3xInRXA+TNoTi0gDRr3mBbIqK+DZI5ht0zrMtrVPyZbE0vUw5QQzP04LL/fKC4PwGL9benjarKPFOqQ==",
"license": "Elastic-2.0",
"dependencies": {
"@inquirer/prompts": "7.9.0",
- "@mintlify/common": "1.0.656",
- "@mintlify/link-rot": "3.0.808",
- "@mintlify/models": "0.0.254",
- "@mintlify/prebuild": "1.0.788",
- "@mintlify/previewing": "4.0.843",
- "@mintlify/validation": "0.1.554",
+ "@mintlify/common": "1.0.664",
+ "@mintlify/link-rot": "3.0.816",
+ "@mintlify/models": "0.0.256",
+ "@mintlify/prebuild": "1.0.796",
+ "@mintlify/previewing": "4.0.851",
+ "@mintlify/validation": "0.1.557",
"adm-zip": "0.5.16",
"chalk": "5.2.0",
"color": "4.2.3",
@@ -1035,16 +1031,16 @@
}
},
"node_modules/@mintlify/common": {
- "version": "1.0.656",
- "resolved": "https://registry.npmjs.org/@mintlify/common/-/common-1.0.656.tgz",
- "integrity": "sha512-xNAzM14iO0KUlnBtHbiF7ldL2LRTbBpMgI6T6vEkGT8nKw5id/xcaAelPt07i1F7KiSKAEFz1ezsPBzVC6yn/w==",
+ "version": "1.0.664",
+ "resolved": "https://registry.npmjs.org/@mintlify/common/-/common-1.0.664.tgz",
+ "integrity": "sha512-wkxX3ouV9JOMh1w2rM5JUyb0nDgmpexP50w0cK+h23nskYWJ5ZLdy84EAFOiLpeBElCaH4eAlK561El/l8XYDw==",
"license": "ISC",
"dependencies": {
"@asyncapi/parser": "3.4.0",
"@mintlify/mdx": "^3.0.4",
- "@mintlify/models": "0.0.254",
+ "@mintlify/models": "0.0.256",
"@mintlify/openapi-parser": "^0.0.8",
- "@mintlify/validation": "0.1.554",
+ "@mintlify/validation": "0.1.557",
"@sindresorhus/slugify": "2.2.0",
"@types/mdast": "4.0.4",
"acorn": "8.11.2",
@@ -1277,21 +1273,20 @@
"resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.23.2.tgz",
"integrity": "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"loose-envify": "^1.1.0"
}
},
"node_modules/@mintlify/link-rot": {
- "version": "3.0.808",
- "resolved": "https://registry.npmjs.org/@mintlify/link-rot/-/link-rot-3.0.808.tgz",
- "integrity": "sha512-HlgZMGy4R8vIqzCcX+0wS5+sSnVsb8K7CcoHeknP46u1PjDREj3yBu9NcgnI5h25Ze0gFYryoj/jV9xQucp1aA==",
+ "version": "3.0.816",
+ "resolved": "https://registry.npmjs.org/@mintlify/link-rot/-/link-rot-3.0.816.tgz",
+ "integrity": "sha512-FQTqzTiKQ72HENlFgVSvctfHv3GdTo86OSKbAdBubrxMBcjP4UpaSn+PkJlCoKcJo2SxtFnXjsP9mVPyBZ29uw==",
"license": "Elastic-2.0",
"dependencies": {
- "@mintlify/common": "1.0.656",
- "@mintlify/prebuild": "1.0.788",
- "@mintlify/previewing": "4.0.843",
- "@mintlify/validation": "0.1.554",
+ "@mintlify/common": "1.0.664",
+ "@mintlify/prebuild": "1.0.796",
+ "@mintlify/previewing": "4.0.851",
+ "@mintlify/validation": "0.1.557",
"fs-extra": "11.1.0",
"unist-util-visit": "4.1.2"
},
@@ -1362,9 +1357,9 @@
}
},
"node_modules/@mintlify/models": {
- "version": "0.0.254",
- "resolved": "https://registry.npmjs.org/@mintlify/models/-/models-0.0.254.tgz",
- "integrity": "sha512-oYTsmrVaGyRQj10g2Oewt+HpDK7pH0ZTox7cHvd/dlu0Yti5CE9oKl4PPfgd5g4Z9Gm8ilEhKDquiX9cs5x7lA==",
+ "version": "0.0.256",
+ "resolved": "https://registry.npmjs.org/@mintlify/models/-/models-0.0.256.tgz",
+ "integrity": "sha512-675dorOg6E5C4NEtewFYJdsg202Xcc2Ke9/1JMrCYtfzPhry8I4v+cWcvfU5X94XRl/IUA38udR2050UDQ2eNw==",
"license": "Elastic-2.0",
"dependencies": {
"axios": "1.10.0",
@@ -1409,15 +1404,15 @@
}
},
"node_modules/@mintlify/prebuild": {
- "version": "1.0.788",
- "resolved": "https://registry.npmjs.org/@mintlify/prebuild/-/prebuild-1.0.788.tgz",
- "integrity": "sha512-yX9YxwkM44qVYC8Wzd5PY/aesJz8+7JvNFj3Z/PQUmOzreCCIC9zHp2g3mglASlLz0nc3qDajG0CzU9QuX3jvw==",
+ "version": "1.0.796",
+ "resolved": "https://registry.npmjs.org/@mintlify/prebuild/-/prebuild-1.0.796.tgz",
+ "integrity": "sha512-6YsMdjglUkwwsihM3WWyr1GbPPlzoOAahCQwShU/aYEHv3b1TxUDUWQt95eb418t3tyAAxRI9x8hA+OenyoDIQ==",
"license": "Elastic-2.0",
"dependencies": {
- "@mintlify/common": "1.0.656",
+ "@mintlify/common": "1.0.664",
"@mintlify/openapi-parser": "^0.0.8",
- "@mintlify/scraping": "4.0.517",
- "@mintlify/validation": "0.1.554",
+ "@mintlify/scraping": "4.0.525",
+ "@mintlify/validation": "0.1.557",
"chalk": "5.3.0",
"favicons": "7.2.0",
"front-matter": "4.0.2",
@@ -1505,14 +1500,14 @@
}
},
"node_modules/@mintlify/previewing": {
- "version": "4.0.843",
- "resolved": "https://registry.npmjs.org/@mintlify/previewing/-/previewing-4.0.843.tgz",
- "integrity": "sha512-7S5sIwA3qYl1Kk6HSO/fJBbsdwG0uoz0I6d4m5preQbzzxZrGG9TXt7WdVmfz2vPw9mx2sf0QAJtQ4o8/R7L3w==",
+ "version": "4.0.851",
+ "resolved": "https://registry.npmjs.org/@mintlify/previewing/-/previewing-4.0.851.tgz",
+ "integrity": "sha512-s1A1PPy2OkcFO6D0bNGqf5bd4AJqAhvOrw1KdiQ1/+KCqSygj0Kwvw4FMtU7FTK3orLwDz09rMR36uIv5so1Hw==",
"license": "Elastic-2.0",
"dependencies": {
- "@mintlify/common": "1.0.656",
- "@mintlify/prebuild": "1.0.788",
- "@mintlify/validation": "0.1.554",
+ "@mintlify/common": "1.0.664",
+ "@mintlify/prebuild": "1.0.796",
+ "@mintlify/validation": "0.1.557",
"better-opn": "3.0.2",
"chalk": "5.2.0",
"chokidar": "3.5.3",
@@ -1598,12 +1593,12 @@
}
},
"node_modules/@mintlify/scraping": {
- "version": "4.0.517",
- "resolved": "https://registry.npmjs.org/@mintlify/scraping/-/scraping-4.0.517.tgz",
- "integrity": "sha512-bTVJUfwwPdSKEcchbF4DYxEwSwQpjIaUWAninqOFROoQDJHN75D8vejoX/BKO5Tl77u28+g1Qsz/PEY1E6wK2g==",
+ "version": "4.0.525",
+ "resolved": "https://registry.npmjs.org/@mintlify/scraping/-/scraping-4.0.525.tgz",
+ "integrity": "sha512-AEopiECV6yYK3/6dibScN8q9PW6b5IKy336kjZcWBTGGzaSckawTfSj2rxc0g1pfqnBKqrc1r9HMgWwslCCJ3g==",
"license": "Elastic-2.0",
"dependencies": {
- "@mintlify/common": "1.0.656",
+ "@mintlify/common": "1.0.664",
"@mintlify/openapi-parser": "^0.0.8",
"fs-extra": "11.1.1",
"hast-util-to-mdast": "10.1.0",
@@ -1681,13 +1676,13 @@
}
},
"node_modules/@mintlify/validation": {
- "version": "0.1.554",
- "resolved": "https://registry.npmjs.org/@mintlify/validation/-/validation-0.1.554.tgz",
- "integrity": "sha512-kn6V/N6qCJdO3PDoL3h4614Zh7a+iMTt57g5fQU6WhhYTW9x46xeJPMqQyNJVsBPUkadYsCGTLVt/zTC6dJYUA==",
+ "version": "0.1.557",
+ "resolved": "https://registry.npmjs.org/@mintlify/validation/-/validation-0.1.557.tgz",
+ "integrity": "sha512-bMglyufJ80mvop1Edu70Wile2jGRjO59iHV8UBhDXzhWdO6mnyl2O8lUFmh70ed7YyKEbnElytUX4wS9PegKsQ==",
"license": "Elastic-2.0",
"dependencies": {
"@mintlify/mdx": "^3.0.4",
- "@mintlify/models": "0.0.254",
+ "@mintlify/models": "0.0.256",
"arktype": "2.1.27",
"js-yaml": "4.1.0",
"lcm": "0.0.3",
@@ -1839,7 +1834,6 @@
"resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.23.2.tgz",
"integrity": "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"loose-envify": "^1.1.0"
}
@@ -1982,15 +1976,13 @@
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/@radix-ui/primitive/-/primitive-1.1.3.tgz",
"integrity": "sha512-JTF99U/6XIjCBo0wqkU5sK10glYe27MRRsfwoiq5zzOEZLHU3A3KCMa5X/azekYRCJ0HlwI0crAXS/5dEHTzDg==",
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/@radix-ui/react-arrow": {
"version": "1.1.7",
"resolved": "https://registry.npmjs.org/@radix-ui/react-arrow/-/react-arrow-1.1.7.tgz",
"integrity": "sha512-F+M1tLhO+mlQaOWspE8Wstg+z6PwxwRd8oQ8IXceWz92kfAmalTRf0EjrouQeo7QssEPfCn05B4Ihs1K9WQ/7w==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-primitive": "2.1.3"
},
@@ -2014,7 +2006,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-compose-refs/-/react-compose-refs-1.1.2.tgz",
"integrity": "sha512-z4eqJvfiNnFMHIIvXP3CY57y2WJs5g2v3X0zm9mEJkrkNv4rDxu+sg9Jh8EkXyeqBkB7SOcboo9dMVqhyrACIg==",
"license": "MIT",
- "peer": true,
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
@@ -2030,7 +2021,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-context/-/react-context-1.1.2.tgz",
"integrity": "sha512-jCi/QKUM2r1Ju5a3J64TH2A5SpKAgh0LpknyqdQ4m6DCV0xJ2HG1xARRwNGPQfi1SLdLWZ1OJz6F4OMBBNiGJA==",
"license": "MIT",
- "peer": true,
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
@@ -2046,7 +2036,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-dismissable-layer/-/react-dismissable-layer-1.1.11.tgz",
"integrity": "sha512-Nqcp+t5cTB8BinFkZgXiMJniQH0PsUt2k51FUhbdfeKvc4ACcG2uQniY/8+h1Yv6Kza4Q7lD7PQV0z0oicE0Mg==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/primitive": "1.1.3",
"@radix-ui/react-compose-refs": "1.1.2",
@@ -2074,7 +2063,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-focus-guards/-/react-focus-guards-1.1.3.tgz",
"integrity": "sha512-0rFg/Rj2Q62NCm62jZw0QX7a3sz6QCQU0LpZdNrJX8byRGaGVTqbrW9jAoIAHyMQqsNpeZ81YgSizOt5WXq0Pw==",
"license": "MIT",
- "peer": true,
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
@@ -2090,7 +2078,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-focus-scope/-/react-focus-scope-1.1.7.tgz",
"integrity": "sha512-t2ODlkXBQyn7jkl6TNaw/MtVEVvIGelJDCG41Okq/KwUsJBwQ4XVZsHAVUkK4mBv3ewiAS3PGuUWuY2BoK4ZUw==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-primitive": "2.1.3",
@@ -2116,7 +2103,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-id/-/react-id-1.1.1.tgz",
"integrity": "sha512-kGkGegYIdQsOb4XjsfM97rXsiHaBwco+hFI66oO4s9LU+PLAC5oJ7khdOVFxkhsmlbpUqDAvXw11CluXP+jkHg==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-use-layout-effect": "1.1.1"
},
@@ -2135,7 +2121,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-popper/-/react-popper-1.2.8.tgz",
"integrity": "sha512-0NJQ4LFFUuWkE7Oxf0htBKS6zLkkjBH+hM1uk7Ng705ReR8m/uelduy1DBo0PyBXPKVnBA6YBlU94MBGXrSBCw==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@floating-ui/react-dom": "^2.0.0",
"@radix-ui/react-arrow": "1.1.7",
@@ -2168,7 +2153,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-portal/-/react-portal-1.1.9.tgz",
"integrity": "sha512-bpIxvq03if6UNwXZ+HTK71JLh4APvnXntDc6XOX8UVq4XQOVl7lwok0AvIl+b8zgCw3fSaVTZMpAPPagXbKmHQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-primitive": "2.1.3",
"@radix-ui/react-use-layout-effect": "1.1.1"
@@ -2193,7 +2177,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-presence/-/react-presence-1.1.5.tgz",
"integrity": "sha512-/jfEwNDdQVBCNvjkGit4h6pMOzq8bHkopq458dPt2lMjx+eBQUohZNG9A7DtO/O5ukSbxuaNGXMjHicgwy6rQQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2",
"@radix-ui/react-use-layout-effect": "1.1.1"
@@ -2218,7 +2201,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-primitive/-/react-primitive-2.1.3.tgz",
"integrity": "sha512-m9gTwRkhy2lvCPe6QJp4d3G1TYEUHn/FzJUtq9MjH46an1wJU+GdoGC5VLof8RX8Ft/DlpshApkhswDLZzHIcQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-slot": "1.2.3"
},
@@ -2242,7 +2224,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-slot/-/react-slot-1.2.3.tgz",
"integrity": "sha512-aeNmHnBxbi2St0au6VBVC7JXFlhLlOnvIIlePNniyUNAClzmtAUEY8/pBiK3iHjufOlwA+c20/8jngo7xcrg8A==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-compose-refs": "1.1.2"
},
@@ -2261,7 +2242,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-callback-ref/-/react-use-callback-ref-1.1.1.tgz",
"integrity": "sha512-FkBMwD+qbGQeMu1cOHnuGB6x4yzPjho8ap5WtbEJ26umhgqVXbhekKUQO+hZEL1vU92a3wHwdp0HAcqAUF5iDg==",
"license": "MIT",
- "peer": true,
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
@@ -2277,7 +2257,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-controllable-state/-/react-use-controllable-state-1.2.2.tgz",
"integrity": "sha512-BjasUjixPFdS+NKkypcyyN5Pmg83Olst0+c6vGov0diwTEo6mgdqVR6hxcEgFuh4QrAs7Rc+9KuGJ9TVCj0Zzg==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-use-effect-event": "0.0.2",
"@radix-ui/react-use-layout-effect": "1.1.1"
@@ -2297,7 +2276,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-effect-event/-/react-use-effect-event-0.0.2.tgz",
"integrity": "sha512-Qp8WbZOBe+blgpuUT+lw2xheLP8q0oatc9UpmiemEICxGvFLYmHm9QowVZGHtJlGbS6A6yJ3iViad/2cVjnOiA==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-use-layout-effect": "1.1.1"
},
@@ -2316,7 +2294,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-escape-keydown/-/react-use-escape-keydown-1.1.1.tgz",
"integrity": "sha512-Il0+boE7w/XebUHyBjroE+DbByORGR9KKmITzbR7MyQ4akpORYP/ZmbhAr0DG7RmmBqoOnZdy2QlvajJ2QA59g==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-use-callback-ref": "1.1.1"
},
@@ -2335,7 +2312,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-layout-effect/-/react-use-layout-effect-1.1.1.tgz",
"integrity": "sha512-RbJRS4UWQFkzHTTwVymMTUv8EqYhOp8dOOviLj2ugtTiXRaRQS7GLGxZTLL1jWhMeoSCf5zmcZkqTl9IiYfXcQ==",
"license": "MIT",
- "peer": true,
"peerDependencies": {
"@types/react": "*",
"react": "^16.8 || ^17.0 || ^18.0 || ^19.0 || ^19.0.0-rc"
@@ -2351,7 +2327,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-rect/-/react-use-rect-1.1.1.tgz",
"integrity": "sha512-QTYuDesS0VtuHNNvMh+CjlKJ4LJickCMUAqjlE3+j8w+RlRpwyX3apEQKGFzbZGdo7XNG1tXa+bQqIE7HIXT2w==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/rect": "1.1.1"
},
@@ -2370,7 +2345,6 @@
"resolved": "https://registry.npmjs.org/@radix-ui/react-use-size/-/react-use-size-1.1.1.tgz",
"integrity": "sha512-ewrXRDTAqAXlkl6t/fkXWNAhFX9I+CkKlw6zjEwk86RSPKwZr3xpBRso655aqYafwtnbpHLj6toFzmd6xdVptQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"@radix-ui/react-use-layout-effect": "1.1.1"
},
@@ -2388,8 +2362,7 @@
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/rect/-/rect-1.1.1.tgz",
"integrity": "sha512-HPwpGIzkl28mWyZqG52jiqDJ12waP11Pa1lGoiyUkIEuMLBP0oeK/C89esbXrxsky5we7dfd8U58nm0SgAWpVw==",
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/@shikijs/core": {
"version": "3.21.0",
@@ -3004,9 +2977,9 @@
"license": "MIT"
},
"node_modules/@types/katex": {
- "version": "0.16.7",
- "resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.16.7.tgz",
- "integrity": "sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ==",
+ "version": "0.16.8",
+ "resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.16.8.tgz",
+ "integrity": "sha512-trgaNyfU+Xh2Tc+ABIb44a5AYUpicB3uwirOioeOkNPPbmgRNtcWyDeeFRzjPZENO9Vq8gvVqfhaaXWLlevVwg==",
"license": "MIT"
},
"node_modules/@types/mdast": {
@@ -3040,18 +3013,19 @@
}
},
"node_modules/@types/node": {
- "version": "25.0.3",
- "resolved": "https://registry.npmjs.org/@types/node/-/node-25.0.3.tgz",
- "integrity": "sha512-W609buLVRVmeW693xKfzHeIV6nJGGz98uCPfeXI1ELMLXVeKYZ9m15fAMSaUPBHYLGFsVRcMmSCksQOrZV9BYA==",
+ "version": "25.0.7",
+ "resolved": "https://registry.npmjs.org/@types/node/-/node-25.0.7.tgz",
+ "integrity": "sha512-C/er7DlIZgRJO7WtTdYovjIFzGsz0I95UlMyR9anTb4aCpBSRWe5Jc1/RvLKUfzmOxHPGjSE5+63HgLtndxU4w==",
"license": "MIT",
+ "peer": true,
"dependencies": {
"undici-types": "~7.16.0"
}
},
"node_modules/@types/react": {
- "version": "19.2.7",
- "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.7.tgz",
- "integrity": "sha512-MWtvHrGZLFttgeEj28VXHxpmwYbor/ATPYbBfSFZEIRK0ecCFLl2Qo55z52Hss+UV9CRN7trSeq1zbgx7YDWWg==",
+ "version": "19.2.8",
+ "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.8.tgz",
+ "integrity": "sha512-3MbSL37jEchWZz2p2mjntRZtPt837ij10ApxKfgmXCTuHWagYg7iA5bqPw6C8BMPfwidlvfPI/fxOc42HLhcyg==",
"license": "MIT",
"peer": true,
"dependencies": {
@@ -3128,6 +3102,7 @@
"resolved": "https://registry.npmjs.org/acorn/-/acorn-8.11.2.tgz",
"integrity": "sha512-nc0Axzp/0FILLEVsm4fNwLCwMttvhEI263QtVPQcbpfZZ3ts0hLsZGOpE6czNlid7CJ9MlyH8reXkpsf3YUY4w==",
"license": "MIT",
+ "peer": true,
"bin": {
"acorn": "bin/acorn"
},
@@ -3192,6 +3167,7 @@
"resolved": "https://registry.npmjs.org/ajv/-/ajv-8.17.1.tgz",
"integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==",
"license": "MIT",
+ "peer": true,
"dependencies": {
"fast-deep-equal": "^3.1.3",
"fast-uri": "^3.0.1",
@@ -3318,7 +3294,6 @@
"resolved": "https://registry.npmjs.org/aria-hidden/-/aria-hidden-1.2.6.tgz",
"integrity": "sha512-ik3ZgC9dY/lYVVM++OISsaYDeg1tb0VtP5uL3ouh1koGOaUMDPpbFIei4JkFimWUFPn90sbMNMXQAIVOlnYKJA==",
"license": "MIT",
- "peer": true,
"dependencies": {
"tslib": "^2.0.0"
},
@@ -3330,8 +3305,7 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
- "license": "0BSD",
- "peer": true
+ "license": "0BSD"
},
"node_modules/arkregex": {
"version": "0.0.3",
@@ -4424,8 +4398,7 @@
"version": "3.2.3",
"resolved": "https://registry.npmjs.org/csstype/-/csstype-3.2.3.tgz",
"integrity": "sha512-z1HGKcYy2xA8AGQfwrn0PAy+PB7X/GSj3UVJW9qKyn43xWa+gl5nXmU4qqLMRzWVLFC8KusUX8T/0kCiOYpAIQ==",
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/data-uri-to-buffer": {
"version": "6.0.2",
@@ -4696,8 +4669,7 @@
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/detect-node-es/-/detect-node-es-1.1.0.tgz",
"integrity": "sha512-ypdmJU/TbBby2Dxibuv7ZLW3Bs1QEmM7nHjEANfohJLvE0XVujisn1qPJcZxg+qDucsr+bP6fLD1rPS3AhJ7EQ==",
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/detect-port": {
"version": "1.5.1",
@@ -4730,7 +4702,8 @@
"version": "0.0.1312386",
"resolved": "https://registry.npmjs.org/devtools-protocol/-/devtools-protocol-0.0.1312386.tgz",
"integrity": "sha512-DPnhUXvmvKT2dFA/j7B+riVLUt9Q6RKJlcppojL5CoRywJJKLDYnRlw0gTFKfgDPHP5E04UoB71SxoJlVZy8FA==",
- "license": "BSD-3-Clause"
+ "license": "BSD-3-Clause",
+ "peer": true
},
"node_modules/didyoumean": {
"version": "1.2.2",
@@ -5842,7 +5815,6 @@
"resolved": "https://registry.npmjs.org/get-nonce/-/get-nonce-1.0.1.tgz",
"integrity": "sha512-FJhYRoDaiatfEkUK8HKlicmu/3SGFD51q3itKDGoSTysQJBnfOcxU5GxnhE1E6soB76MbT0MBtnKJuXyAx+96Q==",
"license": "MIT",
- "peer": true,
"engines": {
"node": ">=6"
}
@@ -7357,6 +7329,7 @@
"resolved": "https://registry.npmjs.org/jsep/-/jsep-1.4.0.tgz",
"integrity": "sha512-B7qPcEVE3NVkmSJbaYxvv4cHkVW7DQsZz13pUMrfS8z8Q/BuShN+gcTXrUlPiGqM2/t/EEaI030bpxMqY8gMlw==",
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">= 10.16.0"
}
@@ -7512,7 +7485,6 @@
"resolved": "https://registry.npmjs.org/loose-envify/-/loose-envify-1.4.0.tgz",
"integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==",
"license": "MIT",
- "peer": true,
"dependencies": {
"js-tokens": "^3.0.0 || ^4.0.0"
},
@@ -8796,12 +8768,12 @@
}
},
"node_modules/mintlify": {
- "version": "4.2.264",
- "resolved": "https://registry.npmjs.org/mintlify/-/mintlify-4.2.264.tgz",
- "integrity": "sha512-vuuYbFxu1BHJtIA6NPXXdSPsbe/70r08Gl8AwGjfQr4x+NlG6YfdkAhdQiBbSos0SwbeO0IcjzdM/sOiQeHJWA==",
+ "version": "4.2.272",
+ "resolved": "https://registry.npmjs.org/mintlify/-/mintlify-4.2.272.tgz",
+ "integrity": "sha512-1X7oZDvqJa1LqONr/5Gy6gYIBix8Dx28Wq1VslszQk50LB1Ckym9TSw68bqpcgq1E3guVTmg/MulUDRcysiNeA==",
"license": "Elastic-2.0",
"dependencies": {
- "@mintlify/cli": "4.0.868"
+ "@mintlify/cli": "4.0.876"
},
"bin": {
"mint": "index.js",
@@ -9404,6 +9376,7 @@
}
],
"license": "MIT",
+ "peer": true,
"dependencies": {
"nanoid": "^3.3.11",
"picocolors": "^1.1.1",
@@ -9779,6 +9752,7 @@
"resolved": "https://registry.npmjs.org/react/-/react-19.2.3.tgz",
"integrity": "sha512-Ku/hhYbVjOQnXDZFv2+RibmLFGwFdeeKHFcOTlrt7xplBnya5OGn/hIRDsqDiSUcfORsDC7MPxwork8jBwsIWA==",
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">=0.10.0"
}
@@ -9801,8 +9775,7 @@
"version": "0.27.0",
"resolved": "https://registry.npmjs.org/scheduler/-/scheduler-0.27.0.tgz",
"integrity": "sha512-eNv+WrVbKu1f3vbYJT/xtiF5syA5HPIMtf9IgY/nKg0sWqzAUEvqY/xm7OcZc/qafLx/iO9FgOmeSAp4v5ti/Q==",
- "license": "MIT",
- "peer": true
+ "license": "MIT"
},
"node_modules/react-reconciler": {
"version": "0.32.0",
@@ -9824,7 +9796,6 @@
"resolved": "https://registry.npmjs.org/react-remove-scroll/-/react-remove-scroll-2.7.2.tgz",
"integrity": "sha512-Iqb9NjCCTt6Hf+vOdNIZGdTiH1QSqr27H/Ek9sv/a97gfueI/5h1s3yRi1nngzMUaOOToin5dI1dXKdXiF+u0Q==",
"license": "MIT",
- "peer": true,
"dependencies": {
"react-remove-scroll-bar": "^2.3.7",
"react-style-singleton": "^2.2.3",
@@ -9850,7 +9821,6 @@
"resolved": "https://registry.npmjs.org/react-remove-scroll-bar/-/react-remove-scroll-bar-2.3.8.tgz",
"integrity": "sha512-9r+yi9+mgU33AKcj6IbT9oRCO78WriSj6t/cF8DWBZJ9aOGPOTEDvdUDz1FwKim7QXWwmHqtdHnRJfhAxEG46Q==",
"license": "MIT",
- "peer": true,
"dependencies": {
"react-style-singleton": "^2.2.2",
"tslib": "^2.0.0"
@@ -9872,22 +9842,19 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
- "license": "0BSD",
- "peer": true
+ "license": "0BSD"
},
"node_modules/react-remove-scroll/node_modules/tslib": {
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
- "license": "0BSD",
- "peer": true
+ "license": "0BSD"
},
"node_modules/react-style-singleton": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/react-style-singleton/-/react-style-singleton-2.2.3.tgz",
"integrity": "sha512-b6jSvxvVnyptAiLjbkWLE/lOnR4lfTtDAl+eUC7RZy+QQWc6wRzIV2CE6xBuMmDxc2qIihtDCZD5NPOFl7fRBQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"get-nonce": "^1.0.0",
"tslib": "^2.0.0"
@@ -9909,8 +9876,7 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
- "license": "0BSD",
- "peer": true
+ "license": "0BSD"
},
"node_modules/read-cache": {
"version": "1.0.0",
@@ -11467,6 +11433,7 @@
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"license": "MIT",
+ "peer": true,
"engines": {
"node": ">=12"
},
@@ -11720,6 +11687,7 @@
"resolved": "https://registry.npmjs.org/unified/-/unified-11.0.5.tgz",
"integrity": "sha512-xKvGhPWw3k84Qjh8bI3ZeJjqnyadK+GEFtazSfZv/rKeTkTjOJho6mFqh2SM96iIcZokxiOpg78GazTSg8+KHA==",
"license": "MIT",
+ "peer": true,
"dependencies": {
"@types/unist": "^3.0.0",
"bail": "^2.0.0",
@@ -11946,7 +11914,6 @@
"resolved": "https://registry.npmjs.org/use-callback-ref/-/use-callback-ref-1.3.3.tgz",
"integrity": "sha512-jQL3lRnocaFtu3V00JToYz/4QkNWswxijDaCVNZRiRTO3HQDLsdu1ZtmIUvV4yPp+rvWm5j0y0TG/S61cuijTg==",
"license": "MIT",
- "peer": true,
"dependencies": {
"tslib": "^2.0.0"
},
@@ -11967,15 +11934,13 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
- "license": "0BSD",
- "peer": true
+ "license": "0BSD"
},
"node_modules/use-sidecar": {
"version": "1.1.3",
"resolved": "https://registry.npmjs.org/use-sidecar/-/use-sidecar-1.1.3.tgz",
"integrity": "sha512-Fedw0aZvkhynoPYlA5WXrMCAMm+nSWdZt6lzJQ7Ok8S6Q+VsHmHpRWndVRJ8Be0ZbkfPc5LRYH+5XrzXcEeLRQ==",
"license": "MIT",
- "peer": true,
"dependencies": {
"detect-node-es": "^1.1.0",
"tslib": "^2.0.0"
@@ -11997,8 +11962,7 @@
"version": "2.8.1",
"resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz",
"integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==",
- "license": "0BSD",
- "peer": true
+ "license": "0BSD"
},
"node_modules/util-deprecate": {
"version": "1.0.2",
@@ -12496,6 +12460,7 @@
"resolved": "https://registry.npmjs.org/zod/-/zod-3.21.4.tgz",
"integrity": "sha512-m46AKbrzKVzOzs/DZgVnG5H55N1sv1M8qZU3A8RIKbs3mrACDNeIOeilDymVb2HdmP8uwshOCF4uJ8uM9rCqJw==",
"license": "MIT",
+ "peer": true,
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}
diff --git a/package.json b/package.json
index 68a6cf0..3877b80 100644
--- a/package.json
+++ b/package.json
@@ -4,6 +4,6 @@
"links": "mintlify broken-links"
},
"dependencies": {
- "mintlify": "^4.2.264"
+ "mintlify": "^4.2.272"
}
}
diff --git a/proxy/components/overview.mdx b/proxy/components/overview.mdx
index 0e2b55b..f7b6637 100644
--- a/proxy/components/overview.mdx
+++ b/proxy/components/overview.mdx
@@ -6,7 +6,6 @@ description: Explore the Edgee components with leading technologies.
import DataCollectionCatalog from '/snippets/data-collection-catalog.mdx';
import JsGatewayCatalog from '/snippets/js-gateway-catalog.mdx';
-import EndpointCatalog from '/snippets/endpoint-catalog.mdx';
import ConsentManagementCatalog from '/snippets/consent-management-catalog.mdx';
import IdentityCatalog from '/snippets/identity-catalog.mdx';
import SecurityCatalog from '/snippets/security-catalog.mdx';
diff --git a/proxy/services/performance/caching.mdx b/proxy/services/performance/caching.mdx
index dee1bf2..87d7c40 100644
--- a/proxy/services/performance/caching.mdx
+++ b/proxy/services/performance/caching.mdx
@@ -89,13 +89,3 @@ To purge cache for a specific path, click the **Purge** button and select **Purg
dark:hidden
/>
-
-### Purging Cache via API
-
-For automated cache purging, you can use the [Purge Cache API endpoint](/api-reference/caching/purge-cache). This allows you to:
-
-- Integrate cache purging into your deployment workflows
-- Automatically purge cache when content is updated
-- Purge cache programmatically from your applications
-
-The API supports purging cache for specific paths across all domains associated with your project.
\ No newline at end of file
diff --git a/sdk/go/configuration.mdx b/sdk/go/configuration.mdx
new file mode 100644
index 0000000..b9de37d
--- /dev/null
+++ b/sdk/go/configuration.mdx
@@ -0,0 +1,193 @@
+---
+title: Go SDK Configuration
+sidebarTitle: Configuration
+description: Learn how to configure and instantiate the Edgee Go SDK.
+icon: settings-2
+---
+
+The Edgee Go SDK provides flexible ways to instantiate a client. All methods support automatic fallback to environment variables if configuration is not fully provided.
+
+## Overview
+
+The `NewClient` function accepts multiple input types:
+
+- `nil` - reads from environment variables
+- `string` - API key string
+- `*Config` - Configuration struct (type-safe)
+- `map[string]interface{}` - Plain map (flexible)
+
+## Method 1: Environment Variables (Recommended for Production)
+
+The simplest and most secure approach is to use environment variables. The SDK will automatically read `EDGEE_API_KEY` and optionally `EDGEE_BASE_URL`.
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+func main() {
+ // Reads from EDGEE_API_KEY and EDGEE_BASE_URL environment variables
+ client, err := edgee.NewClient(nil)
+ if err != nil {
+ log.Fatal(err)
+ }
+}
+```
+
+## Method 2: String API Key (Quick Start)
+
+For quick testing or simple scripts, pass the API key directly as a string:
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+// API key only (uses default base URL: https://api.edgee.ai)
+client, err := edgee.NewClient("your-api-key")
+if err != nil {
+ log.Fatal(err)
+}
+```
+
+**Note**: This method uses `EDGEE_BASE_URL` if it is set, and the default base URL (`https://api.edgee.ai`) otherwise. To set a custom base URL explicitly, use Method 3.
+
+## Method 3: Configuration Struct (Type-Safe)
+
+For better type safety and IDE support, use the `Config` struct:
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+// Full configuration
+client, err := edgee.NewClient(&edgee.Config{
+ APIKey: "your-api-key",
+ BaseURL: "https://api.edgee.ai", // optional, defaults to https://api.edgee.ai
+})
+if err != nil {
+ log.Fatal(err)
+}
+```
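+
+## Method 4: Plain Map (Flexible)
+
+If you build configuration dynamically (for example from parsed JSON or YAML), you can pass a plain map instead of a struct. This sketch uses the `api_key` and `base_url` keys accepted by `NewClient`:
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+// Map configuration with the same fields as Config
+client, err := edgee.NewClient(map[string]interface{}{
+	"api_key":  "your-api-key",
+	"base_url": "https://api.edgee.ai",
+})
+if err != nil {
+	log.Fatal(err)
+}
+```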
+
+## Configuration Priority
+
+The SDK uses the following priority order when resolving configuration:
+
+1. **Constructor argument** (if provided)
+2. **Environment variable** (if constructor argument is missing)
+3. **Default value** (for `base_url` only, defaults to `https://api.edgee.ai`)
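+
+For example, a `Config` that omits `BaseURL` picks it up from the environment before falling back to the default. A minimal sketch, assuming the resolution rules above (the `eu.api.edgee.ai` URL is purely illustrative):
+
+```go
+// Suppose the environment contains:
+//   EDGEE_BASE_URL=https://eu.api.edgee.ai   (illustrative value)
+
+// APIKey comes from the constructor argument (priority 1).
+// BaseURL is not provided, so the SDK falls back to EDGEE_BASE_URL
+// (priority 2), and uses https://api.edgee.ai only if that is unset (priority 3).
+client, err := edgee.NewClient(&edgee.Config{
+	APIKey: "your-api-key",
+})
+if err != nil {
+	log.Fatal(err)
+}
+```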
+
+## Complete Examples
+
+### Example 1: Production Setup
+
+```go
+// .env file
+// EDGEE_API_KEY=prod-api-key
+// EDGEE_BASE_URL=https://api.edgee.ai
+
+package main
+
+import (
+ "log"
+ "github.com/joho/godotenv"
+ "github.com/edgee-cloud/go-sdk/edgee"
+)
+
+func main() {
+ godotenv.Load()
+ client, err := edgee.NewClient(nil)
+ if err != nil {
+ log.Fatal(err)
+ }
+	_ = client // use the client here
+}
+```
+
+### Example 2: Multi-Environment Setup
+
+```go
+package main
+
+import (
+ "log"
+ "os"
+ "github.com/edgee-cloud/go-sdk/edgee"
+)
+
+func createClient() (*edgee.Client, error) {
+ env := os.Getenv("ENVIRONMENT")
+ if env == "" {
+ env = "development"
+ }
+
+ switch env {
+ case "production":
+ return edgee.NewClient(nil) // Use environment variables
+ case "staging":
+ return edgee.NewClient(&edgee.Config{
+ APIKey: os.Getenv("EDGEE_API_KEY"),
+ })
+ default:
+ return edgee.NewClient(&edgee.Config{
+ APIKey: "dev-api-key",
+ BaseURL: "https://au.api.edgee.ai",
+ })
+ }
+}
+
+func main() {
+ client, err := createClient()
+ if err != nil {
+ log.Fatal(err)
+ }
+	_ = client // use the client here
+}
+```
+
+## Troubleshooting
+
+### "EDGEE_API_KEY is not set" Error
+
+**Problem**: The SDK can't find your API key.
+
+**Solutions**:
+1. Set the environment variable:
+ ```bash
+ export EDGEE_API_KEY="your-api-key"
+ ```
+
+2. Pass it directly:
+ ```go
+ client, err := edgee.NewClient("your-api-key")
+ ```
+
+3. Use Config struct:
+ ```go
+ client, err := edgee.NewClient(&edgee.Config{
+ APIKey: "your-api-key",
+ })
+ ```
+
+### Custom Base URL Not Working
+
+**Problem**: Your custom base URL isn't being used.
+
+**Check**:
+1. Verify the base URL in your configuration
+2. Check whether `EDGEE_BASE_URL` is taking effect because your configuration doesn't set a base URL (as with the string method)
+3. Ensure you're using the correct configuration method
+
+```go
+// This will use the BaseURL from Config struct
+client, err := edgee.NewClient(&edgee.Config{
+ APIKey: "key",
+ BaseURL: "https://custom.example.com",
+})
+
+// This will use EDGEE_BASE_URL env var if set, otherwise default
+client, err := edgee.NewClient("key")
+```
+
+## Related Documentation
+
+- [Go SDK Overview](/sdk/go) - Complete SDK documentation
+- [API Reference](/api-reference) - REST API documentation
+- [Quickstart Guide](/quickstart) - Get started with Edgee
diff --git a/sdk/go/index.mdx b/sdk/go/index.mdx
index 43803e2..8a015a3 100644
--- a/sdk/go/index.mdx
+++ b/sdk/go/index.mdx
@@ -1,8 +1,8 @@
---
-title: Golang SDK
-sidebarTitle: Go
-description: Integrate the Golang SDK in your application.
-icon: golang
+title: Go SDK
+sidebarTitle: Introduction
+description: Integrate the Go SDK in your application.
+icon: minus
---
The Edgee Go SDK provides a lightweight, type-safe interface to interact with the Edgee AI Gateway. It supports OpenAI-compatible chat completions, function calling, and streaming.
@@ -25,12 +25,9 @@ import (
)
func main() {
- client, err := edgee.NewClient(nil)
- if err != nil {
- log.Fatal(err)
- }
+ client, _ := edgee.NewClient("your-api-key")
- response, err := client.ChatCompletion("gpt-4o", "What is the capital of France?")
+ response, err := client.Send("gpt-4o", "What is the capital of France?")
if err != nil {
log.Fatal(err)
}
@@ -40,470 +37,9 @@ func main() {
}
```
-## Configuration
-
-The SDK can be configured in multiple ways:
-
-### Using Environment Variables
-
-```go
-// Set EDGEE_API_KEY environment variable
-client, err := edgee.NewClient(nil)
-```
-
-### Using Constructor Parameters
-
-```go
-// String API key
-client, err := edgee.NewClient("your-api-key")
-
-// Configuration struct
-client, err := edgee.NewClient(&edgee.Config{
- APIKey: "your-api-key",
- BaseURL: "https://api.edgee.ai", // optional, defaults to https://api.edgee.ai
-})
-
-// Map configuration
-client, err := edgee.NewClient(map[string]interface{}{
- "api_key": "your-api-key",
- "base_url": "https://api.edgee.ai",
-})
-```
-
-## Usage Examples
-
-### Simple String Input
-
-The simplest way to send a request is with a string input:
-
-```go
-response, err := client.ChatCompletion("gpt-4o", "Explain quantum computing in simple terms.")
-if err != nil {
- log.Fatal(err)
-}
-
-fmt.Println(response.Text())
-```
-
-### Full Message Array
-
-For more control, use a full message array:
-
-```go
-response, err := client.ChatCompletion("gpt-4o", map[string]interface{}{
- "messages": []map[string]string{
- {"role": "system", "content": "You are a helpful assistant."},
- {"role": "user", "content": "Hello!"},
- },
-})
-if err != nil {
- log.Fatal(err)
-}
-
-fmt.Println(response.Text())
-```
-
-### Using InputObject
-
-For better type safety, use the `InputObject` struct:
-
-```go
-response, err := client.ChatCompletion("gpt-4o", edgee.InputObject{
- Messages: []edgee.Message{
- {Role: "system", Content: "You are a helpful assistant."},
- {Role: "user", Content: "Hello!"},
- },
-})
-if err != nil {
- log.Fatal(err)
-}
-
-fmt.Println(response.Text())
-```
-
-### Function Calling (Tools)
-
-The SDK supports OpenAI-compatible function calling:
-
-```go
-response, err := client.ChatCompletion("gpt-4o", map[string]interface{}{
- "messages": []map[string]string{
- {"role": "user", "content": "What is the weather in Paris?"},
- },
- "tools": []map[string]interface{}{
- {
- "type": "function",
- "function": map[string]interface{}{
- "name": "get_weather",
- "description": "Get the current weather for a location",
- "parameters": map[string]interface{}{
- "type": "object",
- "properties": map[string]interface{}{
- "location": map[string]string{
- "type": "string",
- "description": "City name",
- },
- },
- "required": []string{"location"},
- },
- },
- },
- },
- "tool_choice": "auto", // or "none", or map[string]interface{}{"type": "function", "function": map[string]string{"name": "get_weather"}}
-})
-
-// Check if the model wants to call a function
-if toolCalls := response.ToolCalls(); len(toolCalls) > 0 {
- toolCall := toolCalls[0]
- fmt.Printf("Function: %s\n", toolCall.Function.Name)
- fmt.Printf("Arguments: %s\n", toolCall.Function.Arguments)
-}
-```
-
-### Tool Response Handling
-
-After receiving a tool call, you can send the function result back:
-
-```go
-import "encoding/json"
-
-// First request - model requests a tool call
-response1, err := client.ChatCompletion("gpt-4o", map[string]interface{}{
- "messages": []map[string]string{
- {"role": "user", "content": "What is the weather in Paris?"},
- },
- "tools": []map[string]interface{}{...}, // tool definitions
-})
-
-// Execute the function and send the result
-toolCall := response1.ToolCalls()[0]
-var args map[string]interface{}
-json.Unmarshal([]byte(toolCall.Function.Arguments), &args)
-functionResult := getWeather(args)
-
-// Second request - include tool response
-resultJSON, _ := json.Marshal(functionResult)
-toolCallID := toolCall.ID
-
-response2, err := client.ChatCompletion("gpt-4o", edgee.InputObject{
- Messages: []edgee.Message{
- {Role: "user", Content: "What is the weather in Paris?"},
- *response1.MessageContent(), // Include the assistant's message
- {
- Role: "tool",
- ToolCallID: &toolCallID,
- Content: string(resultJSON),
- },
- },
-})
-
-fmt.Println(response2.Text())
-```
-
-## Streaming
-
-The SDK supports streaming responses for real-time output. Use streaming when you want to display tokens as they're generated.
-
-Use `Stream()` to access full chunk metadata:
-
-```go
-// Stream full chunks with metadata
-chunkChan, errChan := client.Stream("gpt-4o", "Explain quantum computing")
-
-for {
- select {
- case chunk, ok := <-chunkChan:
- if !ok {
- // Stream finished
- return
- }
-
- // First chunk contains the role
- if role := chunk.Role(); role != "" {
- fmt.Printf("Role: %s\n", role)
- }
-
- // Content chunks
- if text := chunk.Text(); text != "" {
- fmt.Print(text)
- }
-
- // Last chunk contains finish reason
- if finishReason := chunk.FinishReason(); finishReason != "" {
- fmt.Printf("\nFinish reason: %s\n", finishReason)
- }
-
- case err := <-errChan:
- if err != nil {
- log.Fatal(err)
- }
- }
-}
-```
-
-### Streaming with Messages
-
-Streaming works with full message arrays too:
-
-```go
-chunkChan, errChan := client.Stream("gpt-4o", edgee.InputObject{
- Messages: []edgee.Message{
- {Role: "system", Content: "You are a helpful assistant."},
- {Role: "user", Content: "Write a poem about coding"},
- },
-})
-
-for {
- select {
- case chunk, ok := <-chunkChan:
- if !ok {
- return
- }
- if text := chunk.Text(); text != "" {
- fmt.Print(text)
- }
- case err := <-errChan:
- if err != nil {
- log.Fatal(err)
- }
- }
-}
-```
-
-### Using Send() with stream Parameter
-
-You can also use the `Send()` method with `stream=true`:
-
-```go
-// Returns streaming channels
-result, err := client.Send("gpt-4o", "Tell me a story", true)
-if err != nil {
- log.Fatal(err)
-}
-
-// Type assertion to get the channels
-streamResult := result.(struct {
- ChunkChan <-chan *edgee.StreamChunk
- ErrChan <-chan error
-})
-
-for {
- select {
- case chunk, ok := <-streamResult.ChunkChan:
- if !ok {
- return
- }
- if text := chunk.Text(); text != "" {
- fmt.Print(text)
- }
- case err := <-streamResult.ErrChan:
- if err != nil {
- log.Fatal(err)
- }
- }
-}
-```
-
-### Streaming Response Types
-
-Streaming uses different response types:
-
-```go
-// StreamChunk - returned via channels from Stream()
-type StreamChunk struct {
- ID string
- Object string
- Created int64
- Model string
- Choices []ChatCompletionChoice
-}
-
-// Convenience methods
-func (c *StreamChunk) Text() string // Get content from first choice
-func (c *StreamChunk) Role() string // Get role from first choice
-func (c *StreamChunk) FinishReason() string // Get finish_reason from first choice
-
-type ChatCompletionChoice struct {
- Index int
- Delta *ChatCompletionDelta
- FinishReason *string
-}
-
-type ChatCompletionDelta struct {
- Role *string
- Content *string
- ToolCalls []ToolCall
-}
-```
-
-### Convenience Methods
-
-Both `SendResponse` and `StreamChunk` have convenience methods for easier access:
-
-```go
-// Non-streaming response
-response, _ := client.ChatCompletion("gpt-4o", "Hello")
-fmt.Println(response.Text()) // Instead of response.Choices[0].Message.Content
-fmt.Println(response.FinishReason()) // Instead of *response.Choices[0].FinishReason
-fmt.Println(response.ToolCalls()) // Instead of response.Choices[0].Message.ToolCalls
-
-// Streaming response
-for chunk := range chunkChan {
- fmt.Println(chunk.Text()) // Instead of *chunk.Choices[0].Delta.Content
- fmt.Println(chunk.Role()) // Instead of *chunk.Choices[0].Delta.Role
- fmt.Println(chunk.FinishReason()) // Instead of *chunk.Choices[0].FinishReason
-}
-```
-
-## Response Structure
-
-The `ChatCompletion` method returns a `SendResponse` object:
-
-```go
-type SendResponse struct {
- ID string
- Object string
- Created int64
- Model string
- Choices []ChatCompletionChoice
- Usage *Usage
-}
-
-type ChatCompletionChoice struct {
- Index int
- Message *Message
- FinishReason *string
-}
-
-type Message struct {
- Role string
- Content string
- Name *string
- ToolCalls []ToolCall
- ToolCallID *string
-}
-
-type Usage struct {
- PromptTokens int
- CompletionTokens int
- TotalTokens int
-}
-```
-
-### Accessing Response Data
-
-```go
-response, err := client.ChatCompletion("gpt-4o", "Hello!")
-if err != nil {
- log.Fatal(err)
-}
-
-// Get the first choice's content
-content := response.Text()
-
-// Check finish reason
-finishReason := response.FinishReason() // "stop", "length", "tool_calls", etc.
-
-// Access token usage
-if response.Usage != nil {
- fmt.Printf("Tokens used: %d\n", response.Usage.TotalTokens)
- fmt.Printf("Prompt tokens: %d\n", response.Usage.PromptTokens)
- fmt.Printf("Completion tokens: %d\n", response.Usage.CompletionTokens)
-}
-```
-
-## Type Definitions
-
-The SDK exports Go types for all request and response objects:
-
-```go
-import "github.com/edgee-cloud/go-sdk/edgee"
-
-// Main types
-type Client struct { ... }
-type Config struct { ... }
-type InputObject struct { ... }
-type Message struct { ... }
-type Tool struct { ... }
-type FunctionDefinition struct { ... }
-type ToolCall struct { ... }
-type SendResponse struct { ... }
-type StreamChunk struct { ... }
-type Usage struct { ... }
-```
-
-### Message Types
-
-```go
-type Message struct {
- Role string `json:"role"`
- Content string `json:"content,omitempty"`
- Name *string `json:"name,omitempty"`
- ToolCalls []ToolCall `json:"tool_calls,omitempty"`
- ToolCallID *string `json:"tool_call_id,omitempty"`
-}
-```
-
-### Tool Types
-
-```go
-type FunctionDefinition struct {
- Name string `json:"name"`
- Description *string `json:"description,omitempty"`
- Parameters map[string]interface{} `json:"parameters,omitempty"`
-}
-
-type Tool struct {
- Type string `json:"type"`
- Function FunctionDefinition `json:"function"`
-}
-
-type ToolCall struct {
- ID string `json:"id"`
- Type string `json:"type"`
- Function FunctionCall `json:"function"`
-}
-
-type FunctionCall struct {
- Name string `json:"name"`
- Arguments string `json:"arguments"`
-}
-```
-
-## Error Handling
-
-The SDK returns errors for common issues:
-
-```go
-import "github.com/edgee-cloud/go-sdk/edgee"
-
-// Configuration error
-client, err := edgee.NewClient(nil)
-if err != nil {
- log.Fatalf("Configuration error: %v", err)
-}
-
-// Request error
-response, err := client.ChatCompletion("gpt-4o", "Hello!")
-if err != nil {
- log.Fatalf("Request failed: %v", err)
- // Handle API errors, network errors, etc.
-}
-```
-
## What's Next?
-
-
- Explore the full REST API documentation.
-
-
- Browse 200+ models available through Edgee.
-
-
- Learn about intelligent routing, observability, and privacy controls.
-
-
- Get started with Edgee in minutes.
-
-
+- **[Configuration](/sdk/go/configuration)** - Learn how to configure and instantiate the SDK
+- **[Send Method](/sdk/go/send)** - Complete guide to the `Send()` method
+- **[Stream Method](/sdk/go/stream)** - Learn how to stream responses
+- **[Tools](/sdk/go/tools)** - Detailed guide to function calling
diff --git a/sdk/go/send.mdx b/sdk/go/send.mdx
new file mode 100644
index 0000000..279ba2d
--- /dev/null
+++ b/sdk/go/send.mdx
@@ -0,0 +1,257 @@
+---
+title: Go SDK - Send Method
+sidebarTitle: Send
+description: Complete guide to the Send() method in the Go SDK.
+icon: send
+---
+
+The `Send()` method is used to make chat completion requests to the Edgee AI Gateway.
+
+## Arguments
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `string` | The model identifier to use (e.g., `"openai/gpt-4o"`) |
+| `input` | `any` | The input for the completion. Can be a `string`, `InputObject`, `*InputObject`, or `map[string]interface{}` |
+
+### Input Types
+
+The `Send()` method accepts multiple input types:
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```go
+response, err := client.Send("gpt-4o", "What is the capital of France?")
+if err != nil {
+ log.Fatal(err)
+}
+
+// Equivalent to: input: InputObject{Messages: []Message{{Role: "user", Content: "What is the capital of France?"}}}
+fmt.Println(response.Text())
+// "The capital of France is Paris."
+```
+
+#### InputObject
+
+When `input` is an `InputObject`, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Messages` | `[]Message` | Array of conversation messages |
+| `Tools` | `[]Tool` | Array of function tools available to the model |
+| `ToolChoice` | `any` | Controls which tool (if any) the model should call. Can be `string` (`"auto"`, `"none"`) or `map[string]interface{}`. See [Tools documentation](/sdk/go/tools) for details |
+
+**Example with InputObject:**
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "What is 2+2?"},
+ },
+}
+
+response, err := client.Send("gpt-4o", input)
+if err != nil {
+ log.Fatal(err)
+}
+
+fmt.Println(response.Text())
+// "2+2 equals 4."
+```
+
+#### Map Input
+
+You can also use a `map[string]interface{}` for dynamic input:
+
+```go
+input := map[string]interface{}{
+ "messages": []map[string]string{
+ {"role": "user", "content": "What is 2+2?"},
+ },
+}
+
+response, err := client.Send("gpt-4o", input)
+if err != nil {
+ log.Fatal(err)
+}
+```
+
+### Message Object
+
+Each message in the `Messages` array has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Role` | `string` | The role of the message sender: `"system"`, `"developer"`, `"user"`, `"assistant"`, or `"tool"` |
+| `Content` | `string` | The message content. Required for the `system`, `user`, `tool`, and `developer` roles. Optional for `assistant` when `ToolCalls` is present |
+| `Name` | `*string` | Optional name for the message sender |
+| `ToolCalls` | `[]ToolCall` | Array of tool calls made by the assistant. Only present in `assistant` messages |
+| `ToolCallID` | `*string` | ID of the tool call this message is responding to. Required for `tool` role messages |
+
+### Message Roles
+
+- **`system`**: System instructions that set the behavior of the assistant
+- **`developer`**: Instructions provided by the application developer, prioritized ahead of user messages
+- **`user`**: Instructions provided by an end user
+- **`assistant`**: Assistant responses (can include `ToolCalls`)
+- **`tool`**: Results from tool/function calls (requires `ToolCallID`)
+
+**Example - Multi-Turn Conversation:**
+
+```go
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "system", Content: "You are a helpful assistant."},
+ {Role: "user", Content: "What is 2+2?"},
+ {Role: "assistant", Content: "2+2 equals 4."},
+ {Role: "user", Content: "What about 3+3?"},
+ },
+}
+
+response, err := client.Send("gpt-4o", input)
+if err != nil {
+ log.Fatal(err)
+}
+
+fmt.Println(response.Text())
+// "3+3 equals 6."
+```
+
+For complete tool calling examples and best practices, see [Tools documentation](/sdk/go/tools).
+
+## Return Value
+
+The `Send()` method returns `(SendResponse, error)`. On success, the `SendResponse` contains:
+
+### SendResponse Object
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `ID` | `string` | Unique identifier for the completion |
+| `Object` | `string` | Object type (typically `"chat.completion"`) |
+| `Created` | `int64` | Unix timestamp of when the completion was created |
+| `Model` | `string` | Model identifier used for the completion |
+| `Choices` | `[]Choice` | Array of completion choices (typically one) |
+| `Usage` | `*Usage` | Token usage information (if provided by the API) |
+
+### Choice Object
+
+Each choice in the `Choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Index` | `int` | The index of this choice in the array |
+| `Message` | `*Message` | The assistant's message response |
+| `FinishReason` | `*string` | Reason why the generation stopped. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `nil` |
+
+**Example - Handling Multiple Choices:**
+
+```go
+response, err := client.Send("gpt-4o", "Give me a creative idea.")
+if err != nil {
+ log.Fatal(err)
+}
+
+// Process all choices
+for _, choice := range response.Choices {
+ fmt.Printf("Choice %d: %s\n", choice.Index, choice.Message.Content)
+ if choice.FinishReason != nil {
+ fmt.Printf("Finish reason: %s\n", *choice.FinishReason)
+ }
+}
+```
+
+### Message Object (in Response)
+
+The `Message` in each choice has:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Role` | `string` | The role of the message (typically `"assistant"`) |
+| `Content` | `string` | The text content of the response. Empty when `ToolCalls` is present |
+| `ToolCalls` | `[]ToolCall` | Array of tool calls requested by the model (if any). See [Tools documentation](/sdk/go/tools) for details |
+
+### Usage Object
+
+Token usage information (when available):
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `PromptTokens` | `int` | Number of tokens in the prompt |
+| `CompletionTokens` | `int` | Number of tokens in the completion |
+| `TotalTokens` | `int` | Total tokens used (prompt + completion) |
+
+**Example - Accessing Token Usage:**
+
+```go
+response, err := client.Send("gpt-4o", "Explain quantum computing briefly.")
+if err != nil {
+ log.Fatal(err)
+}
+
+if response.Usage != nil {
+ fmt.Printf("Prompt tokens: %d\n", response.Usage.PromptTokens)
+ fmt.Printf("Completion tokens: %d\n", response.Usage.CompletionTokens)
+ fmt.Printf("Total tokens: %d\n", response.Usage.TotalTokens)
+}
+```
+
+## Convenience Methods
+
+The `SendResponse` struct provides convenience methods for easier access:
+
+| Method | Return Type | Description |
+|--------|-------------|-------------|
+| `Text()` | `string` | Shortcut to `Choices[0].Message.Content` |
+| `MessageContent()` | `*Message` | Shortcut to `Choices[0].Message` |
+| `FinishReason()` | `string` | Shortcut to `*Choices[0].FinishReason` (returns empty string if nil) |
+| `ToolCalls()` | `[]ToolCall` | Shortcut to `Choices[0].Message.ToolCalls` |
+
+**Example - Using Convenience Methods:**
+
+```go
+response, err := client.Send("gpt-4o", "Hello!")
+if err != nil {
+ log.Fatal(err)
+}
+
+// Instead of: response.Choices[0].Message.Content
+fmt.Println(response.Text())
+
+// Instead of: response.Choices[0].Message
+if msg := response.MessageContent(); msg != nil {
+ fmt.Printf("Role: %s\n", msg.Role)
+}
+
+// Instead of: *response.Choices[0].FinishReason
+fmt.Println(response.FinishReason())
+
+// Instead of: response.Choices[0].Message.ToolCalls
+if toolCalls := response.ToolCalls(); len(toolCalls) > 0 {
+ fmt.Printf("Tool calls: %+v\n", toolCalls)
+}
+```
+
+## Error Handling
+
+The `Send()` method returns standard Go errors:
+
+```go
+response, err := client.Send("gpt-4o", "Hello!")
+if err != nil {
+ // Handle errors
+ log.Fatalf("Request failed: %v", err)
+}
+```
+
+### Common Errors
+
+- **API errors**: `fmt.Errorf("API error %d: %s", statusCode, message)` - The API returned an error status
+- **Network errors**: Standard Go HTTP errors
+- **Invalid input**: `fmt.Errorf("unsupported input type: %T", input)` - Invalid request structure
+- **JSON errors**: Errors from JSON marshaling/unmarshaling
diff --git a/sdk/go/stream.mdx b/sdk/go/stream.mdx
new file mode 100644
index 0000000..e2dd564
--- /dev/null
+++ b/sdk/go/stream.mdx
@@ -0,0 +1,300 @@
+---
+title: Go SDK - Stream Method
+sidebarTitle: Stream
+description: Complete guide to the Stream() method in the Go SDK.
+icon: square-stack
+---
+
+The `Stream()` method is used to make streaming chat completion requests to the Edgee AI Gateway. It returns two channels: one for `StreamChunk` objects and one for errors.
+
+## Arguments
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `string` | The model identifier to use (e.g., `"openai/gpt-4o"`) |
+| `input` | `any` | The input for the completion. Can be a `string`, `InputObject`, `*InputObject`, or `map[string]interface{}` |
+
+### Input Types
+
+The `Stream()` method accepts the same input types as `Send()`:
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```go
+chunkChan, errChan := client.Stream("gpt-4o", "Tell me a story")
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ return
+ }
+ if text := chunk.Text(); text != "" {
+ fmt.Print(text)
+ }
+
+ if reason := chunk.FinishReason(); reason != "" {
+ fmt.Printf("\nFinished: %s\n", reason)
+ }
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+// Equivalent to: input: InputObject{Messages: []Message{{Role: "user", Content: "Tell me a story"}}}
+```
+
+#### InputObject or Map
+
+When `input` is an `InputObject` or `map[string]interface{}`, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Messages` | `[]Message` | Array of conversation messages |
+| `Tools` | `[]Tool` | Array of function tools available to the model |
+| `ToolChoice` | `any` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/go/tools) for details |
+
+For details about the `Message` type, see the [Send Method documentation](/sdk/go/send#message-object).
+For details about `Tool` and `ToolChoice` types, see the [Tools documentation](/sdk/go/tools).
+
+**Example - Streaming with Messages:**
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "system", Content: "You are a helpful assistant."},
+ {Role: "user", Content: "Write a poem about coding"},
+ },
+}
+
+chunkChan, errChan := client.Stream("gpt-4o", input)
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ return
+ }
+ if text := chunk.Text(); text != "" {
+ fmt.Print(text)
+ }
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+```
+
+## Return Value
+
+The `Stream()` method returns two channels:
+
+1. **`<-chan *StreamChunk`**: Channel that receives streaming chunks
+2. **`<-chan error`**: Channel that receives errors
+
+### StreamChunk Object
+
+Each chunk received from the channel has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `ID` | `string` | Unique identifier for the completion |
+| `Object` | `string` | Object type (typically `"chat.completion.chunk"`) |
+| `Created` | `int64` | Unix timestamp of when the chunk was created |
+| `Model` | `string` | Model identifier used for the completion |
+| `Choices` | `[]StreamChoice` | Array of streaming choices (typically one) |
+
+### StreamChoice Object
+
+Each choice in the `Choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Index` | `int` | The index of this choice in the array |
+| `Delta` | `*StreamDelta` | The incremental update to the message |
+| `FinishReason` | `*string` | Reason why the generation stopped. Only present in the final chunk. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `nil` |
+
+**Example - Handling Multiple Choices:**
+
+```go
+chunkChan, errChan := client.Stream("gpt-4o", "Give me creative ideas")
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ return
+ }
+ for _, choice := range chunk.Choices {
+ if choice.Delta != nil && choice.Delta.Content != nil {
+ fmt.Printf("Choice %d: %s\n", choice.Index, *choice.Delta.Content)
+ }
+ }
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+```
+
+### StreamDelta Object
+
+The `Delta` object contains incremental updates:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Role` | `*string` | The role of the message (typically `"assistant"`). Only present in the **first chunk** |
+| `Content` | `*string` | Incremental text content. Each chunk contains a portion of the full response |
+| `ToolCalls` | `[]ToolCall` | Array of tool calls (if any). See [Tools documentation](/sdk/go/tools) for details |
+
+## Convenience Methods
+
+The `StreamChunk` struct provides convenience methods for easier access:
+
+| Method | Return Type | Description |
+|--------|-------------|-------------|
+| `Text()` | `string` | Shortcut to `*Choices[0].Delta.Content` - the incremental text content (returns empty string if nil) |
+| `Role()` | `string` | Shortcut to `*Choices[0].Delta.Role` - the message role (first chunk only, returns empty string if nil) |
+| `FinishReason()` | `string` | Shortcut to `*Choices[0].FinishReason` - the finish reason (final chunk only, returns empty string if nil) |
+
+**Example - Using Convenience Methods:**
+
+```go
+chunkChan, errChan := client.Stream("gpt-4o", "Explain quantum computing")
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ return
+ }
+
+ // Content chunks
+ if text := chunk.Text(); text != "" {
+ fmt.Print(text)
+ }
+
+ // First chunk contains the role
+ if role := chunk.Role(); role != "" {
+ fmt.Printf("\nRole: %s\n", role)
+ }
+
+ // Last chunk contains finish reason
+ if reason := chunk.FinishReason(); reason != "" {
+ fmt.Printf("\nFinish reason: %s\n", reason)
+ }
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+```
+
+## Understanding Streaming Behavior
+
+### Chunk Structure
+
+1. **First chunk**: Contains `Role` (typically `"assistant"`) and may contain initial `Content`
+2. **Content chunks**: Contain incremental `Content` updates
+3. **Final chunk**: Contains `FinishReason` indicating why generation stopped
+
+**Example - Collecting Full Response:**
+
+```go
+chunkChan, errChan := client.Stream("gpt-4o", "Tell me a story")
+var fullText strings.Builder
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ fmt.Printf("\n\nFull response (%d characters):\n", fullText.Len())
+ fmt.Println(fullText.String())
+ return
+ }
+ if text := chunk.Text(); text != "" {
+ fullText.WriteString(text)
+ fmt.Print(text) // Also display as it streams
+ }
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+```
+
+### Finish Reasons
+
+| Value | Description |
+|-------|-------------|
+| `"stop"` | Model generated a complete response and stopped naturally |
+| `"length"` | Response was cut off due to token limit |
+| `"tool_calls"` | Model requested tool/function calls |
+| `"content_filter"` | Content was filtered by safety systems |
+| `""` (empty string) | Generation is still in progress (not the final chunk) |
+
+### Empty Chunks
+
+Some chunks may not contain `Content`. This is normal and can happen when:
+- The chunk only contains metadata (role, finish_reason)
+- The chunk is part of tool call processing
+- Network buffering creates empty chunks
+
+Always check for `chunk.Text()` before using it:
+
+```go
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ return
+ }
+ if text := chunk.Text(); text != "" { // ✅ Good: Check before using
+ fmt.Print(text)
+ }
+ // ❌ Bad: fmt.Print(chunk.Text()) - may print empty string
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+```
+
+## Error Handling
+
+The `Stream()` method can return errors in two ways:
+
+1. **Initial errors**: Failures while creating the stream, delivered on the error channel as soon as the request fails
+2. **Stream errors**: Errors that occur mid-stream, also sent through the `errChan` channel
+
+```go
+chunkChan, errChan := client.Stream("gpt-4o", "Hello!")
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ // Stream finished normally
+ return
+ }
+ if text := chunk.Text(); text != "" {
+ fmt.Print(text)
+ }
+ case err := <-errChan:
+ if err != nil {
+ // Handle stream errors
+ log.Fatalf("Stream error: %v", err)
+ }
+ }
+}
+```
diff --git a/sdk/go/tools.mdx b/sdk/go/tools.mdx
new file mode 100644
index 0000000..b8e00af
--- /dev/null
+++ b/sdk/go/tools.mdx
@@ -0,0 +1,618 @@
+---
+title: Go SDK - Tools (Function Calling)
+sidebarTitle: Tools
+description: Complete guide to function calling with the Go SDK.
+icon: square-function
+---
+
+The Edgee Go SDK supports OpenAI-compatible function calling (tools), allowing models to request execution of functions you define. This enables models to interact with external APIs, databases, and your application logic.
+
+## Overview
+
+Function calling works in two steps:
+
+1. **Request**: Send a request with tool definitions. The model may request to call one or more tools.
+2. **Execute & Respond**: Execute the requested functions and send the results back to the model.
+
+## Tool Definition
+
+A tool is defined using the `Tool` struct:
+
+```go
+import "github.com/edgee-cloud/go-sdk/edgee"
+
+tool := edgee.Tool{
+ Type: "function",
+ Function: edgee.FunctionDefinition{
+ Name: "function_name",
+		Description: stringPtr("Function description"), // stringPtr returns &s; see the complete example below for its definition
+ Parameters: map[string]interface{}{
+ "type": "object",
+ "properties": map[string]interface{}{
+ "paramName": map[string]interface{}{
+ "type": "string",
+ "description": "Parameter description",
+ },
+ },
+ "required": []string{"paramName"},
+ },
+ },
+}
+```
+
+### FunctionDefinition
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Name` | `string` | The name of the function (must be unique, a-z, A-Z, 0-9, _, -) |
+| `Description` | `*string` | Description of what the function does. **Highly recommended** - helps the model understand when to use it |
+| `Parameters` | `map[string]interface{}` | JSON Schema object describing the function parameters |
+
+### Parameters Schema
+
+The `Parameters` field uses JSON Schema format:
+
+```go
+parameters := map[string]interface{}{
+ "type": "object",
+ "properties": map[string]interface{}{
+ "paramName": map[string]interface{}{
+ "type": "string", // or "number", "boolean", "object", "array"
+ "description": "Parameter description",
+ },
+ },
+ "required": []string{"paramName"}, // Array of required parameter names
+}
+```
+
+**Example - Defining a Tool:**
+
+```go
+import (
+ "github.com/edgee-cloud/go-sdk/edgee"
+)
+
+function := edgee.FunctionDefinition{
+ Name: "get_weather",
+ Description: stringPtr("Get the current weather for a location"),
+ Parameters: map[string]interface{}{
+ "type": "object",
+ "properties": map[string]interface{}{
+ "location": map[string]interface{}{
+ "type": "string",
+ "description": "The city and state, e.g. San Francisco, CA",
+ },
+ "unit": map[string]interface{}{
+ "type": "string",
+ "enum": []string{"celsius", "fahrenheit"},
+ "description": "Temperature unit",
+ },
+ },
+ "required": []string{"location"},
+ },
+}
+
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "What is the weather in Paris?"},
+ },
+ Tools: []edgee.Tool{
+ {Type: "function", Function: function},
+ },
+ ToolChoice: "auto",
+}
+
+response, err := client.Send("gpt-4o", input)
+if err != nil {
+ log.Fatal(err)
+}
+```
+
+## Tool Choice
+
+The `ToolChoice` parameter controls when and which tools the model should call:
+
+| Value | Type | Description |
+|-------|------|-------------|
+| `"auto"` | `string` | Let the model decide whether to call tools (default) |
+| `"none"` | `string` | Don't call any tools, even if provided |
+| `map[string]interface{}{"type": "function", "function": map[string]string{"name": "function_name"}}` | `map[string]interface{}` | Force the model to call a specific function |
+
+**Example - Force a Specific Tool:**
+
+```go
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "What is the weather?"},
+ },
+ Tools: []edgee.Tool{
+ {Type: "function", Function: function},
+ },
+ ToolChoice: map[string]interface{}{
+ "type": "function",
+ "function": map[string]string{
+ "name": "get_weather",
+ },
+ },
+}
+
+response, err := client.Send("gpt-4o", input)
+// Model will always call get_weather
+```
+
+**Example - Disable Tool Calls:**
+
+```go
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "What is the weather?"},
+ },
+ Tools: []edgee.Tool{
+ {Type: "function", Function: function},
+ },
+ ToolChoice: "none",
+}
+
+response, err := client.Send("gpt-4o", input)
+// Model will not call tools, even though they're available
+```
+
+## Tool Call Object Structure
+
+When the model requests a tool call, you receive a `ToolCall` object in the response:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `ID` | `string` | Unique identifier for this tool call |
+| `Type` | `string` | Type of tool call (typically `"function"`) |
+| `Function` | `FunctionCall` | Function call details |
+| `Function.Name` | `string` | Name of the function to call |
+| `Function.Arguments` | `string` | JSON string containing the function arguments |
+
+### Parsing Arguments
+
+```go
+import "encoding/json"
+
+if toolCalls := response.ToolCalls(); len(toolCalls) > 0 {
+ toolCall := toolCalls[0]
+ var args map[string]interface{}
+ if err := json.Unmarshal([]byte(toolCall.Function.Arguments), &args); err != nil {
+ log.Fatal(err)
+ }
+ // args is now a map[string]interface{}
+ fmt.Println(args["location"])
+}
+```
+
+## Complete Example
+
+Here's a complete end-to-end example with error handling:
+
+```go
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+ "log"
+ "github.com/edgee-cloud/go-sdk/edgee"
+)
+
+func stringPtr(s string) *string {
+ return &s
+}
+
+func getWeather(location string, unit string) map[string]interface{} {
+ return map[string]interface{}{
+ "location": location,
+ "temperature": 15,
+ "unit": unit,
+ "condition": "sunny",
+ }
+}
+
+func main() {
+ client, err := edgee.NewClient("your-api-key")
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Define the weather function
+ function := edgee.FunctionDefinition{
+ Name: "get_weather",
+ Description: stringPtr("Get the current weather for a location"),
+ Parameters: map[string]interface{}{
+ "type": "object",
+ "properties": map[string]interface{}{
+ "location": map[string]interface{}{
+ "type": "string",
+ "description": "The city name",
+ },
+ "unit": map[string]interface{}{
+ "type": "string",
+ "enum": []string{"celsius", "fahrenheit"},
+ "description": "Temperature unit",
+ },
+ },
+ "required": []string{"location"},
+ },
+ }
+
+ // Step 1: Initial request with tools
+ input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "What is the weather in Paris and Tokyo?"},
+ },
+ Tools: []edgee.Tool{
+ {Type: "function", Function: function},
+ },
+ ToolChoice: "auto",
+ }
+
+ response1, err := client.Send("gpt-4o", input)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Step 2: Execute all tool calls
+ messages := []edgee.Message{
+ {Role: "user", Content: "What is the weather in Paris and Tokyo?"},
+ }
+
+ // Add assistant's message
+ if msg := response1.MessageContent(); msg != nil {
+ messages = append(messages, *msg)
+ }
+
+ if toolCalls := response1.ToolCalls(); len(toolCalls) > 0 {
+ for _, toolCall := range toolCalls {
+ var args map[string]interface{}
+ if err := json.Unmarshal([]byte(toolCall.Function.Arguments), &args); err != nil {
+ log.Fatal(err)
+ }
+
+ location := args["location"].(string)
+ unit := "celsius"
+ if u, ok := args["unit"].(string); ok {
+ unit = u
+ }
+
+ result := getWeather(location, unit)
+ resultJSON, _ := json.Marshal(result)
+ toolCallID := toolCall.ID
+
+ messages = append(messages, edgee.Message{
+ Role: "tool",
+ ToolCallID: &toolCallID,
+ Content: string(resultJSON),
+ })
+ }
+ }
+
+ // Step 3: Send results back
+ input2 := edgee.InputObject{
+ Messages: messages,
+ Tools: []edgee.Tool{
+ // Keep tools available for follow-up
+ {Type: "function", Function: function},
+ },
+ }
+
+ response2, err := client.Send("gpt-4o", input2)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Println(response2.Text())
+}
+```
+
+**Example - Multiple Tools:**
+
+You can provide multiple tools and let the model choose which ones to call:
+
+```go
+getWeatherTool := edgee.Tool{
+ Type: "function",
+ Function: getWeatherFunction,
+}
+
+sendEmailTool := edgee.Tool{
+ Type: "function",
+ Function: sendEmailFunction,
+}
+
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "Get the weather in Paris and send an email about it"},
+ },
+ Tools: []edgee.Tool{getWeatherTool, sendEmailTool},
+ ToolChoice: "auto",
+}
+
+response, err := client.Send("gpt-4o", input)
+if err != nil {
+ log.Fatal(err)
+}
+```
+
+## Streaming with Tools
+
+The `Stream()` method also supports tools. For details about streaming, see the [Stream Method documentation](/sdk/go/stream).
+
+```go
+input := edgee.InputObject{
+ Messages: []edgee.Message{
+ {Role: "user", Content: "What is the weather in Paris?"},
+ },
+ Tools: []edgee.Tool{
+ {Type: "function", Function: function},
+ },
+ ToolChoice: "auto",
+}
+
+chunkChan, errChan := client.Stream("gpt-4o", input)
+
+for {
+ select {
+ case chunk, ok := <-chunkChan:
+ if !ok {
+ return
+ }
+ if text := chunk.Text(); text != "" {
+ fmt.Print(text)
+ }
+
+ // Check for tool calls in the delta
+ if len(chunk.Choices) > 0 && chunk.Choices[0].Delta != nil {
+ if toolCalls := chunk.Choices[0].Delta.ToolCalls; len(toolCalls) > 0 {
+ fmt.Printf("\nTool calls detected: %+v\n", toolCalls)
+ }
+ }
+
+ if chunk.FinishReason() == "tool_calls" {
+ fmt.Println("\nModel requested tool calls")
+ }
+ case err := <-errChan:
+ if err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+```
+
+## Best Practices
+
+### 1. Always Provide Descriptions
+
+Descriptions help the model understand when to use each function:
+
+```go
+// ✅ Good
+function := edgee.FunctionDefinition{
+ Name: "get_weather",
+ Description: stringPtr("Get the current weather conditions for a specific location"),
+ Parameters: parameters,
+}
+
+// ❌ Bad
+function := edgee.FunctionDefinition{
+ Name: "get_weather",
+ Description: nil, // Missing description
+ Parameters: parameters,
+}
+```
+
+### 2. Use Clear Parameter Names
+
+```go
+// ✅ Good
+properties := map[string]interface{}{
+ "location": map[string]interface{}{
+ "type": "string",
+ "description": "The city name",
+ },
+}
+
+// ❌ Bad
+properties := map[string]interface{}{
+ "loc": map[string]interface{}{
+ "type": "string",
+ // Unclear name, no description
+ },
+}
+```
+
+### 3. Mark Required Parameters
+
+```go
+parameters := map[string]interface{}{
+ "type": "object",
+ "properties": map[string]interface{}{
+ "location": map[string]interface{}{
+ "type": "string",
+ "description": "City name",
+ },
+ "unit": map[string]interface{}{
+ "type": "string",
+ "description": "Temperature unit",
+ },
+ },
+ "required": []string{"location"}, // location is required, unit is optional
+}
+```
+
+### 4. Handle Multiple Tool Calls
+
+Models can request multiple tool calls in a single response. Use goroutines for parallel execution when possible:
+
+```go
+if toolCalls := response.ToolCalls(); len(toolCalls) > 0 {
+ type result struct {
+ toolCallID string
+ result map[string]interface{}
+ }
+ results := make(chan result, len(toolCalls))
+
+ // Execute all tool calls in parallel
+ for _, toolCall := range toolCalls {
+ go func(tc edgee.ToolCall) {
+ var args map[string]interface{}
+			json.Unmarshal([]byte(tc.Function.Arguments), &args) // error check omitted for brevity
+ res := executeFunction(tc.Function.Name, args)
+ results <- result{toolCallID: tc.ID, result: res}
+ }(toolCall)
+ }
+
+ // Collect results
+ for i := 0; i < len(toolCalls); i++ {
+ res := <-results
+ resultJSON, _ := json.Marshal(res.result)
+ toolCallID := res.toolCallID
+ messages = append(messages, edgee.Message{
+ Role: "tool",
+ ToolCallID: &toolCallID,
+ Content: string(resultJSON),
+ })
+ }
+}
+```
+
+**Example - Handling Multiple Tool Calls:**
+
+```go
+// Step 2: Execute all tool calls
+messages := []edgee.Message{
+ {Role: "user", Content: "What is the weather in Paris and Tokyo?"},
+}
+
+if msg := response1.MessageContent(); msg != nil {
+ messages = append(messages, *msg)
+}
+
+if toolCalls := response1.ToolCalls(); len(toolCalls) > 0 {
+ for _, toolCall := range toolCalls {
+ var args map[string]interface{}
+		json.Unmarshal([]byte(toolCall.Function.Arguments), &args) // error check omitted for brevity
+		location := args["location"].(string)
+		unit, _ := args["unit"].(string) // optional; empty if the model omits it
+		result := getWeather(location, unit)
+
+ resultJSON, _ := json.Marshal(result)
+ toolCallID := toolCall.ID
+ messages = append(messages, edgee.Message{
+ Role: "tool",
+ ToolCallID: &toolCallID,
+ Content: string(resultJSON),
+ })
+ }
+}
+```
+
+### 5. Error Handling in Tool Execution
+
+```go
+if toolCalls := response.ToolCalls(); len(toolCalls) > 0 {
+ for _, toolCall := range toolCalls {
+ var args map[string]interface{}
+ if err := json.Unmarshal([]byte(toolCall.Function.Arguments), &args); err != nil {
+ log.Printf("Failed to parse arguments: %v", err)
+ continue
+ }
+
+ result, err := executeFunction(toolCall.Function.Name, args)
+ if err != nil {
+ // Send error back to model
+ errorJSON, _ := json.Marshal(map[string]interface{}{
+ "error": err.Error(),
+ })
+ toolCallID := toolCall.ID
+ messages = append(messages, edgee.Message{
+ Role: "tool",
+ ToolCallID: &toolCallID,
+ Content: string(errorJSON),
+ })
+ } else {
+ resultJSON, _ := json.Marshal(result)
+ toolCallID := toolCall.ID
+ messages = append(messages, edgee.Message{
+ Role: "tool",
+ ToolCallID: &toolCallID,
+ Content: string(resultJSON),
+ })
+ }
+ }
+}
+```
+
+### 6. Keep Tools Available
+
+Include tools in follow-up requests so the model can call them again if needed:
+
+```go
+input2 := edgee.InputObject{
+ Messages: messagesWithToolResults,
+ Tools: []edgee.Tool{
+ // Keep the same tools available
+ {Type: "function", Function: function},
+ },
+}
+
+response2, err := client.Send("gpt-4o", input2)
+```
+
+**Example - Checking for Tool Calls:**
+
+```go
+if toolCalls := response.ToolCalls(); len(toolCalls) > 0 {
+ // Model wants to call a function
+ for _, toolCall := range toolCalls {
+ fmt.Printf("Function: %s\n", toolCall.Function.Name)
+ fmt.Printf("Arguments: %s\n", toolCall.Function.Arguments)
+ }
+}
+```
+
+**Example - Executing Functions and Sending Results:**
+
+```go
+// Execute the function
+toolCalls := response.ToolCalls()
+if len(toolCalls) > 0 {
+ toolCall := toolCalls[0]
+ var args map[string]interface{}
+	json.Unmarshal([]byte(toolCall.Function.Arguments), &args) // error check omitted for brevity
+	location := args["location"].(string)
+	unit, _ := args["unit"].(string) // optional; empty if the model omits it
+	weatherResult := getWeather(location, unit)
+
+ // Send the result back
+ messages := []edgee.Message{
+ {Role: "user", Content: "What is the weather in Paris?"},
+ }
+
+ // Include assistant's message with tool_calls
+ if msg := response.MessageContent(); msg != nil {
+ messages = append(messages, *msg)
+ }
+
+ resultJSON, _ := json.Marshal(weatherResult)
+ toolCallID := toolCall.ID
+ messages = append(messages, edgee.Message{
+ Role: "tool",
+ ToolCallID: &toolCallID,
+ Content: string(resultJSON),
+ })
+
+ input2 := edgee.InputObject{
+ Messages: messages,
+ Tools: []edgee.Tool{
+ {Type: "function", Function: function},
+ },
+ }
+
+ response2, err := client.Send("gpt-4o", input2)
+ if err != nil {
+ log.Fatal(err)
+ }
+ fmt.Println(response2.Text())
+ // "The weather in Paris is 15°C and sunny."
+}
+```
diff --git a/sdk/index.mdx b/sdk/index.mdx
index 6cf4ecf..7c37532 100644
--- a/sdk/index.mdx
+++ b/sdk/index.mdx
@@ -19,14 +19,15 @@ Choose your language and get started in minutes:
```typescript
import Edgee from 'edgee';
- const edgee = new Edgee(process.env.EDGEE_API_KEY);
+ const edgee = new Edgee("your-api-key");
const response = await edgee.send({
model: 'gpt-4o',
input: 'What is the capital of France?',
});
- console.log(response.choices[0].message.content);
+ console.log(response.text);
+ // "The capital of France is Paris."
```
@@ -38,14 +39,15 @@ Choose your language and get started in minutes:
```python
from edgee import Edgee
- edgee = Edgee(api_key=os.environ["EDGEE_API_KEY"])
+ edgee = Edgee("your-api-key")
response = edgee.send(
model="gpt-4o",
input="What is the capital of France?"
)
- print(response.content)
+ print(response.text)
+ # "The capital of France is Paris."
```
@@ -56,49 +58,40 @@ Choose your language and get started in minutes:
```go
package main
-
+
import (
"fmt"
- "os"
- "github.com/edgee-cloud/go-sdk"
+ "log"
+ "github.com/edgee-cloud/go-sdk/edgee"
)
-
+
func main() {
- client := edgee.NewClient(os.Getenv("EDGEE_API_KEY"))
-
- response, _ := client.Send(edgee.SendParams{
- Model: "gpt-4o",
- Input: "What is the capital of France?",
- })
-
- fmt.Println(response.Content)
+ client, _ := edgee.NewClient("your-api-key")
+
+ response, err := client.Send("gpt-4o", "What is the capital of France?")
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Println(response.Text())
+ // "The capital of France is Paris."
}
```
- Add to `Cargo.toml`:
- ```toml
- [dependencies]
- edgee = "0.1"
- tokio = { version = "1", features = ["full"] }
+ ```bash
+  cargo add edgee
```
```rust
use edgee::Edgee;
- #[tokio::main]
-  async fn main() -> Result<(), Box<dyn std::error::Error>> {
- let client = Edgee::from_env()?;
+ let client = Edgee::with_api_key("your-api-key");
+ let response = client.send("gpt-4o", "What is the capital of France?").await.unwrap();
- let response = client.send(
- "gpt-4o",
- "What is the capital of France?"
- ).await?;
-
- println!("{}", response.text().unwrap_or(""));
- Ok(())
- }
+ println!("{}", response.text().unwrap_or(""));
+ // "The capital of France is Paris."
```
@@ -119,7 +112,7 @@ To learn more about the SDKs, see the individual SDK pages:
Lightweight, type-safe SDK for Node.js and TypeScript applications.
@@ -128,7 +121,7 @@ To learn more about the SDKs, see the individual SDK pages:
Python SDK for seamless integration with Python applications.
@@ -137,7 +130,7 @@ To learn more about the SDKs, see the individual SDK pages:
Modern async Rust SDK with compile-time safety and streaming support.
@@ -146,7 +139,7 @@ To learn more about the SDKs, see the individual SDK pages:
High-performance Go SDK for building scalable applications.
@@ -155,7 +148,7 @@ To learn more about the SDKs, see the individual SDK pages:
Use Edgee with the OpenAI SDK for Python and TypeScript.
diff --git a/sdk/python/configuration.mdx b/sdk/python/configuration.mdx
new file mode 100644
index 0000000..1a7fddd
--- /dev/null
+++ b/sdk/python/configuration.mdx
@@ -0,0 +1,146 @@
+---
+title: Python SDK Configuration
+sidebarTitle: Configuration
+description: Learn how to configure and instantiate the Edgee Python SDK.
+icon: settings-2
+---
+
+The Edgee Python SDK provides flexible ways to instantiate a client. All methods support automatic fallback to environment variables if configuration is not fully provided.
+
+## Overview
+
+The `Edgee` class constructor accepts multiple input types:
+
+- `None` or no arguments - reads from environment variables
+- `str` - API key string (backward compatible)
+- `EdgeeConfig` - Configuration dataclass (type-safe)
+- `dict` - Plain dictionary (flexible)
+
+## Method 1: Environment Variables (Recommended for Production)
+
+The simplest and most secure approach is to use environment variables.
+
+The SDK automatically reads `EDGEE_API_KEY` (required) and optionally `EDGEE_BASE_URL` from your environment variables.
+
+```python
+from edgee import Edgee
+
+# Reads from EDGEE_API_KEY and EDGEE_BASE_URL environment variables
+edgee = Edgee()
+```
+
+
+## Method 2: String API Key (Quick Start)
+
+For quick testing or simple scripts, pass the API key directly as a string:
+
+```python
+from edgee import Edgee
+
+# API key only (uses default base URL: https://api.edgee.ai)
+edgee = Edgee("your-api-key")
+```
+
+**Note**: With a string API key, the base URL falls back to the `EDGEE_BASE_URL` environment variable if set, otherwise to the default (`https://api.edgee.ai`). To set a custom base URL in code, use Method 3 or 4.
+
+## Method 3: Configuration Object (Type-Safe)
+
+For better type safety, IDE support, and code clarity, use the `EdgeeConfig` dataclass:
+
+```python
+from edgee import Edgee, EdgeeConfig
+
+# Full configuration
+edgee = Edgee(EdgeeConfig(
+ api_key="your-api-key",
+ base_url="https://api.edgee.ai" # optional, defaults to https://api.edgee.ai
+))
+```
+
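+## Method 4: Dictionary (Flexible)
+
+For maximum flexibility, pass a plain dictionary with the same fields as `EdgeeConfig`:
+
+```python
+from edgee import Edgee
+
+edgee = Edgee({
+    "api_key": "your-api-key",
+    "base_url": "https://api.edgee.ai"  # optional, defaults to https://api.edgee.ai
+})
+```
+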
+## Configuration Priority
+
+The SDK uses the following priority order when resolving configuration:
+
+1. **Constructor argument** (if provided)
+2. **Environment variable** (if constructor argument is missing)
+3. **Default value** (for `base_url` only, defaults to `https://api.edgee.ai`)
+
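+For example, assuming `EDGEE_BASE_URL` is set in the environment (a minimal sketch):
+
+```python
+import os
+from edgee import Edgee, EdgeeConfig
+
+os.environ["EDGEE_BASE_URL"] = "https://eu.api.edgee.ai"
+
+# api_key comes from the constructor argument; base_url is not provided,
+# so the EDGEE_BASE_URL environment variable wins over the built-in default.
+edgee = Edgee(EdgeeConfig(api_key="your-api-key"))
+```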
+
+## Complete Examples
+
+### Example 1: Production Setup
+
+```python
+# .env file
+# EDGEE_API_KEY=prod-api-key
+# EDGEE_BASE_URL=https://api.edgee.ai
+
+from dotenv import load_dotenv
+from edgee import Edgee
+
+load_dotenv()
+edgee = Edgee() # Reads from environment
+```
+
+### Example 2: Multi-Environment Setup
+
+```python
+import os
+from edgee import Edgee, EdgeeConfig
+
+ENV = os.getenv("ENVIRONMENT", "development")
+
+if ENV == "production":
+ edgee = Edgee() # Use environment variables
+elif ENV == "staging":
+ edgee = Edgee(EdgeeConfig(
+ api_key=os.getenv("EDGEE_API_KEY")
+ ))
+else:
+ edgee = Edgee(EdgeeConfig(
+ api_key="dev-api-key",
+ base_url="https://eu.api.edgee.ai"
+ ))
+```
+
+## Troubleshooting
+
+### "EDGEE_API_KEY is not set" Error
+
+**Problem**: The SDK can't find your API key.
+
+**Solutions**:
+1. Set the environment variable:
+ ```bash
+ export EDGEE_API_KEY="your-api-key"
+ ```
+
+2. Pass it directly:
+ ```python
+ edgee = Edgee("your-api-key")
+ ```
+
+3. Use EdgeeConfig:
+ ```python
+ edgee = Edgee(EdgeeConfig(api_key="your-api-key"))
+ ```
+
+### Custom Base URL Not Working
+
+**Problem**: Your custom base URL isn't being used.
+
+**Check**:
+1. Verify the base URL in your configuration
+2. Check if environment variable `EDGEE_BASE_URL` is overriding it
+3. Ensure you're using the correct configuration method
+
+```python
+# This will use the base_url from EdgeeConfig
+edgee = Edgee(EdgeeConfig(
+ api_key="key",
+ base_url="https://custom.example.com"
+))
+
+# This will use EDGEE_BASE_URL env var if set, otherwise default
+edgee = Edgee("key")
+```
diff --git a/sdk/python/index.mdx b/sdk/python/index.mdx
index 8e65890..ac190e0 100644
--- a/sdk/python/index.mdx
+++ b/sdk/python/index.mdx
@@ -1,8 +1,8 @@
---
title: Python SDK
-sidebarTitle: Python
+sidebarTitle: Introduction
description: Integrate the Python SDK in your application.
-icon: python
+icon: minus
---
The Edgee Python SDK provides a lightweight, type-safe interface to interact with the Edgee AI Gateway. It supports OpenAI-compatible chat completions, function calling, and streaming.
@@ -18,387 +18,23 @@ pip install edgee
```python
from edgee import Edgee
-edgee = Edgee()
+# Create client
+edgee = Edgee("your-api-key")
+# Send a simple request
response = edgee.send(
model="gpt-4o",
input="What is the capital of France?"
)
+# Access the response
print(response.text)
# "The capital of France is Paris."
```
-## Configuration
-
-The SDK can be configured in multiple ways:
-
-### Using Environment Variables
-
-```python
-import os
-
-# Set EDGEE_API_KEY environment variable
-edgee = Edgee()
-```
-
-### Using Constructor Parameters
-
-```python
-# String API key (backward compatible)
-edgee = Edgee("your-api-key")
-
-# Configuration object
-from edgee import EdgeeConfig
-
-edgee = Edgee(EdgeeConfig(
- api_key="your-api-key",
- base_url="https://api.edgee.ai" # optional, defaults to https://api.edgee.ai
-))
-
-# Dictionary configuration
-edgee = Edgee({
- "api_key": "your-api-key",
- "base_url": "https://api.edgee.ai"
-})
-```
-
-## Usage Examples
-
-### Simple String Input
-
-The simplest way to send a request is with a string input:
-
-```python
-response = edgee.send(
- model="gpt-4o",
- input="Explain quantum computing in simple terms."
-)
-
-print(response.text)
-```
-
-### Full Message Array
-
-For more control, use a full message array:
-
-```python
-response = edgee.send(
- model="gpt-4o",
- input={
- "messages": [
- {"role": "system", "content": "You are a helpful assistant."},
- {"role": "user", "content": "Hello!"}
- ]
- }
-)
-
-print(response.text)
-```
-
-### Function Calling (Tools)
-
-The SDK supports OpenAI-compatible function calling:
-
-```python
-response = edgee.send(
- model="gpt-4o",
- input={
- "messages": [
- {"role": "user", "content": "What is the weather in Paris?"}
- ],
- "tools": [
- {
- "type": "function",
- "function": {
- "name": "get_weather",
- "description": "Get the current weather for a location",
- "parameters": {
- "type": "object",
- "properties": {
- "location": {
- "type": "string",
- "description": "City name"
- }
- },
- "required": ["location"]
- }
- }
- }
- ],
- "tool_choice": "auto" # or "none", or {"type": "function", "function": {"name": "get_weather"}}
- }
-)
-
-# Check if the model wants to call a function
-if response.tool_calls:
- tool_call = response.tool_calls[0]
- print(f"Function: {tool_call['function']['name']}")
- print(f"Arguments: {tool_call['function']['arguments']}")
-```
-
-### Tool Response Handling
-
-After receiving a tool call, you can send the function result back:
-
-```python
-import json
-
-# First request - model requests a tool call
-response1 = edgee.send(
- model="gpt-4o",
- input={
- "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
- "tools": [...] # tool definitions
- }
-)
-
-# Execute the function and send the result
-tool_call = response1.tool_calls[0]
-function_result = get_weather(json.loads(tool_call['function']['arguments']))
-
-# Second request - include tool response
-response2 = edgee.send(
- model="gpt-4o",
- input={
- "messages": [
- {"role": "user", "content": "What is the weather in Paris?"},
- response1.message, # Include the assistant's message with tool_calls
- {
- "role": "tool",
- "tool_call_id": tool_call['id'],
- "content": json.dumps(function_result)
- }
- ]
- }
-)
-
-print(response2.text)
-```
-
-## Streaming
-
-The SDK supports streaming responses for real-time output. Use streaming when you want to display tokens as they're generated.
-
-Use `stream()` to access full chunk metadata:
-
-```python
-# Stream full chunks with metadata
-for chunk in edgee.stream("gpt-4o", "Explain quantum computing"):
- # First chunk contains the role
- if chunk.role:
- print(f"Role: {chunk.role}")
-
- # Content chunks
- if chunk.text:
- print(chunk.text, end="", flush=True)
-
- # Last chunk contains finish reason
- if chunk.finish_reason:
- print(f"\nFinish reason: {chunk.finish_reason}")
-```
-
-### Streaming with Messages
-
-Streaming works with full message arrays too:
-
-```python
-for chunk in edgee.stream(
- "gpt-4o",
- {
- "messages": [
- {"role": "system", "content": "You are a helpful assistant."},
- {"role": "user", "content": "Write a poem about coding"}
- ]
- }
-):
- if chunk.text:
- print(chunk.text, end="", flush=True)
-```
-
-### Using send() with stream Parameter
-
-You can also use the `send()` method with `stream=True`:
-
-```python
-# Returns a generator instead of SendResponse
-for chunk in edgee.send("gpt-4o", "Tell me a story", stream=True):
- if chunk.text:
- print(chunk.text, end="", flush=True)
-```
-
-### Streaming Response Types
-
-Streaming uses different response types:
-
-```python
-# StreamChunk - returned by stream()
-@dataclass
-class StreamChunk:
- choices: list[StreamChoice]
-
- # Convenience properties
- text: str | None # Get content from first choice
- role: str | None # Get role from first choice
- finish_reason: str | None # Get finish_reason from first choice
-
-@dataclass
-class StreamChoice:
- index: int
- delta: StreamDelta
- finish_reason: str | None
-
-@dataclass
-class StreamDelta:
- role: str | None = None
- content: str | None = None
- tool_calls: list[dict] | None = None
-```
-
-### Convenience Properties
-
-Both `SendResponse` and `StreamChunk` have convenience properties for easier access:
-
-```python
-# Non-streaming response
-response = edgee.send("gpt-4o", "Hello")
-print(response.text) # Instead of response.choices[0].message["content"]
-print(response.finish_reason) # Instead of response.choices[0].finish_reason
-print(response.tool_calls) # Instead of response.choices[0].message.get("tool_calls")
-
-# Streaming response
-for chunk in edgee.stream("gpt-4o", "Hello"):
- print(chunk.text) # Instead of chunk.choices[0].delta.content
- print(chunk.role) # Instead of chunk.choices[0].delta.role
- print(chunk.finish_reason) # Instead of chunk.choices[0].finish_reason
-```
-
-## Response Structure
-
-The `send` method returns a `SendResponse` object:
-
-```python
-@dataclass
-class SendResponse:
- choices: list[Choice]
- usage: Usage | None = None
-
-@dataclass
-class Choice:
- index: int
- message: dict # {"role": str, "content": str, "tool_calls": list | None}
- finish_reason: str | None
-
-@dataclass
-class Usage:
- prompt_tokens: int
- completion_tokens: int
- total_tokens: int
-```
-
-### Accessing Response Data
-
-```python
-response = edgee.send("gpt-4o", "Hello!")
-
-# Get the first choice's content
-content = response.text
-
-# Check finish reason
-finish_reason = response.finish_reason # 'stop', 'length', 'tool_calls', etc.
-
-# Access token usage
-if response.usage:
- print(f"Tokens used: {response.usage.total_tokens}")
- print(f"Prompt tokens: {response.usage.prompt_tokens}")
- print(f"Completion tokens: {response.usage.completion_tokens}")
-```
-
-## Type Definitions
-
-The SDK exports Python dataclasses for all request and response objects:
-
-```python
-from edgee import (
- Edgee,
- EdgeeConfig,
- Message,
- Tool,
- FunctionDefinition,
- ToolCall,
- InputObject,
- SendResponse,
- StreamChunk,
- Choice,
- Usage
-)
-```
-
-### Message Types
-
-```python
-@dataclass
-class Message:
- role: str # "system" | "user" | "assistant" | "tool"
- content: str | None = None
- name: str | None = None
- tool_calls: list[ToolCall] | None = None
- tool_call_id: str | None = None
-```
-
-### Tool Types
-
-```python
-@dataclass
-class FunctionDefinition:
- name: str
- description: str | None = None
- parameters: dict | None = None
-
-@dataclass
-class Tool:
- type: str # "function"
- function: FunctionDefinition
-
-@dataclass
-class ToolCall:
- id: str
- type: str
- function: dict # {"name": str, "arguments": str}
-```
-
-## Error Handling
-
-The SDK raises exceptions for common issues:
-
-```python
-from edgee import Edgee
-
-try:
- edgee = Edgee() # Raises ValueError if EDGEE_API_KEY is not set
-except ValueError as error:
- print(f"Configuration error: {error}")
-
-try:
- response = edgee.send("gpt-4o", "Hello!")
-except RuntimeError as error:
- print(f"Request failed: {error}")
- # Handle API errors, network errors, etc.
-```
-
## What's Next?
-
-
- Explore the full REST API documentation.
-
-
- Browse 200+ models available through Edgee.
-
-
- Learn about intelligent routing, observability, and privacy controls.
-
-
- Get started with Edgee in minutes.
-
-
+- **[Configuration](/sdk/python/configuration)** - Learn how to configure and instantiate the SDK
+- **[Send Method](/sdk/python/send)** - Complete guide to the `send()` method
+- **[Stream Method](/sdk/python/stream)** - Learn how to stream responses
+- **[Tools](/sdk/python/tools)** - Detailed guide to function calling
diff --git a/sdk/python/send.mdx b/sdk/python/send.mdx
new file mode 100644
index 0000000..8a3a351
--- /dev/null
+++ b/sdk/python/send.mdx
@@ -0,0 +1,236 @@
+---
+title: Python SDK - Send Method
+sidebarTitle: Send
+description: Complete guide to the send() method in the Python SDK.
+icon: send
+---
+
+The `send()` method is used to make non-streaming chat completion requests to the Edgee AI Gateway. It returns a `SendResponse` object with the model's response.
+
+## Arguments
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `str` | The model identifier to use (e.g., `"gpt-4o"`) |
+| `input` | `str \| InputObject \| dict` | The input for the completion. Can be a simple string or a structured `InputObject` or dictionary |
+| `stream` | `bool` | If `True`, returns a generator yielding `StreamChunk` objects. If `False` (default), returns a `SendResponse` object |
+
+### Input Types
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input="What is the capital of France?"
+)
+
+# Equivalent to: input={"messages": [{"role": "user", "content": "What is the capital of France?"}]}
+print(response.text)
+# "The capital of France is Paris."
+```
+
+#### InputObject or Dictionary
+
+When `input` is an `InputObject` or dictionary, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `messages` | `list[dict]` | Array of conversation messages |
+| `tools` | `list[dict] \| None` | Array of function tools available to the model |
+| `tool_choice` | `str \| dict \| None` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/python/tools) for details |
+
+**Example with Dictionary Input:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "What is 2+2?"}
+ ]
+ }
+)
+
+print(response.text)
+# "2+2 equals 4."
+```
+
+### Message Object
+
+Each message in the `messages` array has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `str` | The role of the message sender: `"system"`, `"developer"`, `"user"`, `"assistant"`, or `"tool"` |
+| `content` | `str \| None` | The message content. Required for `system`, `user`, `tool` and `developer` roles. Optional for `assistant` when `tool_calls` is present |
+| `name` | `str \| None` | Optional name for the message sender |
+| `tool_calls` | `list[dict] \| None` | Array of tool calls made by the assistant. Only present in `assistant` messages |
+| `tool_call_id` | `str \| None` | ID of the tool call this message is responding to. Required for `tool` role messages |
+
+### Message Roles
+
+- **`system`**: System instructions that set the behavior of the assistant
+- **`developer`**: Instructions provided by the application developer, prioritized ahead of user messages
+- **`user`**: Instructions provided by an end user
+- **`assistant`**: Assistant responses (can include `tool_calls`)
+- **`tool`**: Results from tool/function calls (requires `tool_call_id`)
+
+**Example - System and User Messages:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "What is 2+2?"},
+ {"role": "assistant", "content": "2+2 equals 4."},
+ {"role": "user", "content": "What about 3+3?"}
+ ]
+ }
+)
+
+print(response.text)
+# "3+3 equals 6."
+```
+
+For complete tool calling examples and best practices, see [Tools documentation](/sdk/python/tools).
+
+## Return Value
+
+The `send()` method returns a `SendResponse` object when `stream=False` (default):
+
+### SendResponse Object
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `choices` | `list[Choice]` | Array of completion choices (typically one) |
+| `usage` | `Usage \| None` | Token usage information (if provided by the API) |
+
+### Choice Object
+
+Each choice in the `choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `index` | `int` | The index of this choice in the array |
+| `message` | `dict` | The assistant's message response |
+| `finish_reason` | `str \| None` | Reason why the generation stopped. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `None` |
+
+**Example - Handling Multiple Choices:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input="Give me a creative idea."
+)
+
+# Process all choices
+for choice in response.choices:
+ print(f"Choice {choice.index}: {choice.message.get('content')}")
+ print(f"Finish reason: {choice.finish_reason}")
+```
+
+### Message Object (in Response)
+
+The `message` in each choice has:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `str` | The role of the message (typically `"assistant"`) |
+| `content` | `str \| None` | The text content of the response. `None` when `tool_calls` is present |
+| `tool_calls` | `list[dict] \| None` | Array of tool calls requested by the model (if any). See [Tools documentation](/sdk/python/tools) for details |
+
+### Usage Object
+
+Token usage information (when available):
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `prompt_tokens` | `int` | Number of tokens in the prompt |
+| `completion_tokens` | `int` | Number of tokens in the completion |
+| `total_tokens` | `int` | Total tokens used (prompt + completion) |
+
+**Example - Accessing Token Usage:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input="Explain quantum computing briefly."
+)
+
+if response.usage:
+ print(f"Prompt tokens: {response.usage.prompt_tokens}")
+ print(f"Completion tokens: {response.usage.completion_tokens}")
+ print(f"Total tokens: {response.usage.total_tokens}")
+```
+
+## Convenience Properties
+
+The `SendResponse` class provides convenience properties for easier access:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `text` | `str \| None` | Shortcut to `choices[0].message["content"]` |
+| `message` | `dict \| None` | Shortcut to `choices[0].message` |
+| `finish_reason` | `str \| None` | Shortcut to `choices[0].finish_reason` |
+| `tool_calls` | `list \| None` | Shortcut to `choices[0].message.get("tool_calls")` |
+
+**Example - Using Convenience Properties:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input="Hello!"
+)
+
+# Instead of: response.choices[0].message["content"]
+print(response.text)
+
+# Instead of: response.choices[0].message
+print(response.message)
+
+# Instead of: response.choices[0].finish_reason
+print(response.finish_reason)
+
+# Instead of: response.choices[0].message.get("tool_calls")
+if response.tool_calls:
+ print("Tool calls:", response.tool_calls)
+```
+
+## Streaming with send()
+
+You can use `send()` with `stream=True` to get streaming responses. This returns a generator yielding `StreamChunk` objects:
+
+```python
+for chunk in edgee.send("gpt-4o", "Tell me a story", stream=True):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+```
+
+For more details about streaming, see the [Stream Method documentation](/sdk/python/stream).
+
+## Error Handling
+
+The `send()` method can raise exceptions in several scenarios:
+
+```python
+try:
+ response = edgee.send(
+ model="gpt-4o",
+ input="Hello!"
+ )
+except RuntimeError as error:
+ # API errors: "API error {status}: {message}"
+ # Network errors: Standard HTTP errors
+ print(f"Request failed: {error}")
+```
+
+### Common Errors
+
+- **API errors**: `RuntimeError: API error {status}: {message}` - The API returned an error status
+- **Network errors**: Standard HTTP errors from `urllib`
+- **Invalid input**: Errors from invalid request structure
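+
+Because `RuntimeError` covers both API and transient network failures, a simple retry loop can make requests more robust (a minimal sketch, not part of the SDK):
+
+```python
+import time
+
+response = None
+for attempt in range(3):
+    try:
+        response = edgee.send(model="gpt-4o", input="Hello!")
+        break
+    except RuntimeError as error:
+        if attempt == 2:
+            raise  # give up after three attempts
+        time.sleep(2 ** attempt)  # simple exponential backoff
+```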
diff --git a/sdk/python/stream.mdx b/sdk/python/stream.mdx
new file mode 100644
index 0000000..16d1b55
--- /dev/null
+++ b/sdk/python/stream.mdx
@@ -0,0 +1,200 @@
+---
+title: Python SDK - Stream Method
+sidebarTitle: Stream
+description: Complete guide to the stream() method in the Python SDK.
+icon: square-stack
+---
+
+The `stream()` method is used to make streaming chat completion requests to the Edgee AI Gateway. It returns a generator that yields `StreamChunk` objects as they arrive from the API.
+
+## Arguments
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `str` | The model identifier to use (e.g., `"gpt-4o"`) |
+| `input` | `str \| InputObject \| dict` | The input for the completion. Can be a simple string or a structured `InputObject` or dictionary |
+
+### Input Types
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```python
+for chunk in edgee.stream("gpt-4o", "Tell me a story"):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+
+ if chunk.finish_reason:
+ print(f"\nFinished: {chunk.finish_reason}")
+# Equivalent to: input={"messages": [{"role": "user", "content": "Tell me a story"}]}
+```
+
+#### InputObject or Dictionary
+
+When `input` is an `InputObject` or dictionary, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `messages` | `list[dict]` | Array of conversation messages |
+| `tools` | `list[dict] \| None` | Array of function tools available to the model |
+| `tool_choice` | `str \| dict \| None` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/python/tools) for details |
+
+For details about `Message` type, see the [Send Method documentation](/sdk/python/send#message-object).
+For details about `Tool` and `ToolChoice` types, see the [Tools documentation](/sdk/python/tools).
+
+**Example - Streaming with Messages:**
+
+```python
+for chunk in edgee.stream("gpt-4o", {
+ "messages": [
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Write a poem about coding"}
+ ]
+}):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+```
+
+## Return Value
+
+The `stream()` method returns a generator that yields `StreamChunk` objects. Each chunk contains incremental updates to the response.
+
+### StreamChunk Object
+
+Each chunk yielded by the generator has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `choices` | `list[StreamChoice]` | Array of streaming choices (typically one) |
+
+### StreamChoice Object
+
+Each choice in the `choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `index` | `int` | The index of this choice in the array |
+| `delta` | `StreamDelta` | The incremental update to the message |
+| `finish_reason` | `str \| None` | Reason why the generation stopped. Only present in the final chunk. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `None` |
+
+**Example - Handling Multiple Choices:**
+
+```python
+for chunk in edgee.stream("gpt-4o", "Give me creative ideas"):
+ for choice in chunk.choices:
+ if choice.delta.content:
+ print(f"Choice {choice.index}: {choice.delta.content}")
+```
+
+### StreamDelta Object
+
+The `delta` object contains incremental updates:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `str \| None` | The role of the message (typically `"assistant"`). Only present in the **first chunk** |
+| `content` | `str \| None` | Incremental text content. Each chunk contains a portion of the full response |
+| `tool_calls` | `list[dict] \| None` | Array of tool calls (if any). See [Tools documentation](/sdk/python/tools) for details |
+
+## Convenience Properties
+
+The `StreamChunk` class provides convenience properties for easier access:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `text` | `str \| None` | Shortcut to `choices[0].delta.content` - the incremental text content |
+| `role` | `str \| None` | Shortcut to `choices[0].delta.role` - the message role (first chunk only) |
+| `finish_reason` | `str \| None` | Shortcut to `choices[0].finish_reason` - the finish reason (final chunk only) |
+
+**Example - Using Convenience Properties:**
+
+```python
+for chunk in edgee.stream("gpt-4o", "Explain quantum computing"):
+ # Content chunks
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+
+ # First chunk contains the role
+ if chunk.role:
+ print(f"\nRole: {chunk.role}")
+
+ # Last chunk contains finish reason
+ if chunk.finish_reason:
+ print(f"\nFinish reason: {chunk.finish_reason}")
+```
+
+## Understanding Streaming Behavior
+
+### Chunk Structure
+
+1. **First chunk**: Contains `role` (typically `"assistant"`) and may contain initial `content`
+2. **Content chunks**: Contain incremental `content` updates
+3. **Final chunk**: Contains `finish_reason` indicating why generation stopped
+
+**Example - Collecting Full Response:**
+
+```python
+full_text = ""
+
+for chunk in edgee.stream("gpt-4o", "Tell me a story"):
+ if chunk.text:
+ full_text += chunk.text
+ print(chunk.text, end="", flush=True) # Also display as it streams
+
+print(f"\n\nFull response ({len(full_text)} characters):")
+print(full_text)
+```
+
+### Finish Reasons
+
+| Value | Description |
+|-------|-------------|
+| `"stop"` | Model generated a complete response and stopped naturally |
+| `"length"` | Response was cut off due to token limit |
+| `"tool_calls"` | Model requested tool/function calls |
+| `"content_filter"` | Content was filtered by safety systems |
+| `None` | Generation is still in progress (not the final chunk) |
+
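+You can branch on these values, for example to detect a truncated response (a minimal sketch):
+
+```python
+for chunk in edgee.stream("gpt-4o", "Write a long essay about compilers"):
+    if chunk.text:
+        print(chunk.text, end="", flush=True)
+    if chunk.finish_reason == "length":
+        print("\n[response was cut off by the token limit]")
+```
+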
+### Empty Chunks
+
+Some chunks may not contain `content`. This is normal and can happen when:
+- The chunk only contains metadata (role, finish_reason)
+- The chunk is part of tool call processing
+- Network buffering creates empty chunks
+
+Always check for `chunk.text` before using it:
+
+```python
+for chunk in edgee.stream("gpt-4o", "Hello"):
+ if chunk.text: # ✅ Good: Check before using
+ print(chunk.text)
+ # ❌ Bad: print(chunk.text) - may print None
+```
+
+## Alternative: Using send() with stream=True
+
+You can also use the `send()` method with `stream=True` to get streaming responses:
+
+```python
+for chunk in edgee.send("gpt-4o", "Tell me a story", stream=True):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+```
+
+The `stream()` method is a convenience wrapper that calls `send()` with `stream=True`.
+
+## Error Handling
+
+The `stream()` method can raise exceptions:
+
+```python
+try:
+ for chunk in edgee.stream("gpt-4o", "Hello!"):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+except RuntimeError as error:
+ # API errors: "API error {status}: {message}"
+ # Network errors: Standard HTTP errors
+ print(f"Stream failed: {error}")
+```
diff --git a/sdk/python/tools.mdx b/sdk/python/tools.mdx
new file mode 100644
index 0000000..d523460
--- /dev/null
+++ b/sdk/python/tools.mdx
@@ -0,0 +1,560 @@
+---
+title: Python SDK - Tools (Function Calling)
+sidebarTitle: Tools
+description: Complete guide to function calling with the Python SDK.
+icon: square-function
+---
+
+The Edgee Python SDK supports OpenAI-compatible function calling (tools), allowing models to request execution of functions you define. This enables models to interact with external APIs, databases, and your application logic.
+
+## Overview
+
+Function calling works in two steps:
+
+1. **Request**: Send a request with tool definitions. The model may request to call one or more tools.
+2. **Execute & Respond**: Execute the requested functions and send the results back to the model.
+
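+In outline, the loop looks like this (a minimal sketch, given `messages` and `tools` lists; `run_tool` is a placeholder for your own dispatch logic):
+
+```python
+response = edgee.send(model="gpt-4o", input={"messages": messages, "tools": tools})
+
+if response.tool_calls:  # step 1: the model requested one or more tool calls
+    messages.append(response.message)  # include the assistant message carrying tool_calls
+    for call in response.tool_calls:
+        messages.append({
+            "role": "tool",
+            "tool_call_id": call["id"],
+            "content": run_tool(call),  # step 2: execute; must return a JSON string
+        })
+    response = edgee.send(model="gpt-4o", input={"messages": messages, "tools": tools})
+
+print(response.text)
+```
+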
+## Tool Definition
+
+A tool is defined using a dictionary with the following structure:
+
+```python
+{
+ "type": "function",
+ "function": {
+ "name": "function_name",
+ "description": "Function description",
+ "parameters": {
+ "type": "object",
+ "properties": {...},
+ "required": [...]
+ }
+ }
+}
+```
+
+### FunctionDefinition
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `name` | `str` | The name of the function (must be unique, a-z, A-Z, 0-9, _, -) |
+| `description` | `str \| None` | Description of what the function does. **Highly recommended** - helps the model understand when to use it |
+| `parameters` | `dict \| None` | JSON Schema object describing the function parameters |
+
+### Parameters Schema
+
+The `parameters` field uses JSON Schema format:
+
+```python
+{
+ "type": "object",
+ "properties": {
+ "paramName": {
+            "type": "string",  # or "number", "boolean", "object", "array"
+ "description": "Parameter description"
+ }
+ },
+ "required": ["paramName"] # Array of required parameter names
+}
+```
+
+**Example - Defining a Tool:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "What is the weather in Paris?"}
+ ],
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather for a location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The city and state, e.g. San Francisco, CA"
+ },
+ "unit": {
+ "type": "string",
+ "enum": ["celsius", "fahrenheit"],
+ "description": "Temperature unit"
+ }
+ },
+ "required": ["location"]
+ }
+ }
+ }
+ ],
+ "tool_choice": "auto"
+ }
+)
+```
+
+## Tool Choice
+
+The `tool_choice` parameter controls when and which tools the model should call:
+
+| Value | Type | Description |
+|-------|------|-------------|
+| `"auto"` | `str` | Let the model decide whether to call tools (default) |
+| `"none"` | `str` | Don't call any tools, even if provided |
+| `{"type": "function", "function": {"name": "function_name"}}` | `dict` | Force the model to call a specific function |
+
+**Example - Force a Specific Tool:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "What is the weather?"}
+ ],
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather",
+ "parameters": {...}
+ }
+ }
+ ],
+ "tool_choice": {
+ "type": "function",
+ "function": {"name": "get_weather"}
+ }
+ }
+)
+# Model will always call get_weather
+```
+
+**Example - Disable Tool Calls:**
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "What is the weather?"}
+ ],
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather",
+ "parameters": {...}
+ }
+ }
+ ],
+ "tool_choice": "none"
+ }
+)
+# Model will not call tools, even though they're available
+```
+
+## Tool Call Object Structure
+
+When the model requests a tool call, you receive a `ToolCall` object in the response:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | `str` | Unique identifier for this tool call |
+| `type` | `str` | Type of tool call (typically `"function"`) |
+| `function` | `dict` | Function call details |
+| `function["name"]` | `str` | Name of the function to call |
+| `function["arguments"]` | `str` | JSON string containing the function arguments |
+
+### Parsing Arguments
+
+```python
+import json
+
+tool_call = response.tool_calls[0]
+args = json.loads(tool_call["function"]["arguments"])
+# args is now a Python dictionary
+print(args["location"]) # e.g., "Paris"
+```
+
+## Complete Example
+
+Here's a complete end-to-end example with error handling:
+
+```python
+import json
+from edgee import Edgee
+
+edgee = Edgee("your-api-key")
+
+# Define the weather function
+def get_weather(location: str, unit: str = "celsius"):
+ # Simulate API call
+ return {
+ "location": location,
+ "temperature": 15,
+ "unit": unit,
+ "condition": "sunny"
+ }
+
+# Step 1: Initial request with tools
+response1 = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "What is the weather in Paris and Tokyo?"}
+ ],
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather for a location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "The city name"
+ },
+ "unit": {
+ "type": "string",
+ "enum": ["celsius", "fahrenheit"],
+ "description": "Temperature unit"
+ }
+ },
+ "required": ["location"]
+ }
+ }
+ }
+ ],
+ "tool_choice": "auto"
+ }
+)
+
+# Step 2: Execute all tool calls
+messages = [
+ {"role": "user", "content": "What is the weather in Paris and Tokyo?"},
+ response1.message # Include assistant's message
+]
+
+if response1.tool_calls:
+ for tool_call in response1.tool_calls:
+ args = json.loads(tool_call["function"]["arguments"])
+        result = get_weather(args["location"], args.get("unit", "celsius"))
+
+ messages.append({
+ "role": "tool",
+ "tool_call_id": tool_call["id"],
+ "content": json.dumps(result)
+ })
+
+# Step 3: Send results back
+response2 = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": messages,
+ "tools": [
+ # Keep tools available for follow-up
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather for a location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {"type": "string", "description": "The city name"},
+ "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
+ },
+ "required": ["location"]
+ }
+ }
+ }
+ ]
+ }
+)
+
+print(response2.text)
+```
+
+**Example - Multiple Tools:**
+
+You can provide multiple tools and let the model choose which ones to call:
+
+```python
+response = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "Get the weather in Paris and send an email about it"}
+ ],
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather for a location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {"type": "string", "description": "City name"}
+ },
+ "required": ["location"]
+ }
+ }
+ },
+ {
+ "type": "function",
+ "function": {
+ "name": "send_email",
+ "description": "Send an email to a recipient",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "to": {"type": "string", "description": "Recipient email address"},
+ "subject": {"type": "string", "description": "Email subject"},
+ "body": {"type": "string", "description": "Email body"}
+ },
+ "required": ["to", "subject", "body"]
+ }
+ }
+ }
+ ],
+ "tool_choice": "auto"
+ }
+)
+```
+
+## Streaming with Tools
+
+The `stream()` method also supports tools. For details about streaming, see the [Stream Method documentation](/sdk/python/stream).
+
+```python
+for chunk in edgee.stream("gpt-4o", {
+ "messages": [
+ {"role": "user", "content": "What is the weather in Paris?"}
+ ],
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather for a location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {"type": "string", "description": "City name"}
+ },
+ "required": ["location"]
+ }
+ }
+ }
+ ],
+ "tool_choice": "auto"
+}):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+
+ # Check for tool calls in the delta
+ tool_calls = chunk.choices[0].delta.tool_calls if chunk.choices else None
+ if tool_calls:
+ print(f"\nTool calls detected: {tool_calls}")
+
+ if chunk.finish_reason == "tool_calls":
+ print("\nModel requested tool calls")
+```
+
+## Best Practices
+
+### 1. Always Provide Descriptions
+
+Descriptions help the model understand when to use each function:
+
+```python
+# ✅ Good
+{
+ "name": "get_weather",
+ "description": "Get the current weather conditions for a specific location",
+ "parameters": {...}
+}
+
+# ❌ Bad
+{
+ "name": "get_weather",
+ # Missing description
+ "parameters": {...}
+}
+```
+
+### 2. Use Clear Parameter Names
+
+```python
+# ✅ Good
+"properties": {
+ "location": {"type": "string", "description": "The city name"}
+}
+
+# ❌ Bad
+"properties": {
+ "loc": {"type": "string"} # Unclear name, no description
+}
+```
+
+### 3. Mark Required Parameters
+
+```python
+"parameters": {
+ "type": "object",
+ "properties": {
+ "location": {"type": "string", "description": "City name"},
+ "unit": {"type": "string", "description": "Temperature unit"}
+ },
+ "required": ["location"] # location is required, unit is optional
+}
+```
+
+### 4. Handle Multiple Tool Calls
+
+Models can request multiple tool calls in a single response. Use parallel execution when possible:
+
+```python
+import asyncio
+
+if response.tool_calls:
+ async def execute_tool_call(tool_call):
+ args = json.loads(tool_call["function"]["arguments"])
+ result = await execute_function(tool_call["function"]["name"], args)
+ return {
+ "tool_call_id": tool_call["id"],
+ "result": result
+ }
+
+    # Execute all tool calls in parallel (run this block inside an async function)
+ results = await asyncio.gather(*[
+ execute_tool_call(tool_call) for tool_call in response.tool_calls
+ ])
+
+ # Add all tool results to messages
+ for result in results:
+ messages.append({
+ "role": "tool",
+ "tool_call_id": result["tool_call_id"],
+ "content": json.dumps(result["result"])
+ })
+```
+
+**Example - Handling Multiple Tool Calls:**
+
+```python
+# Step 2: Execute all tool calls
+messages = [
+ {"role": "user", "content": "What is the weather in Paris and Tokyo?"},
+ response1.message # Include assistant's message
+]
+
+if response1.tool_calls:
+ for tool_call in response1.tool_calls:
+ args = json.loads(tool_call["function"]["arguments"])
+        result = get_weather(args["location"], args.get("unit", "celsius"))
+
+ messages.append({
+ "role": "tool",
+ "tool_call_id": tool_call["id"],
+ "content": json.dumps(result)
+ })
+```
+
+### 5. Error Handling in Tool Execution
+
+```python
+if response.tool_calls:
+ for tool_call in response.tool_calls:
+ try:
+ args = json.loads(tool_call["function"]["arguments"])
+            result = await execute_function(tool_call["function"]["name"], args)  # assumes an async context
+
+ messages.append({
+ "role": "tool",
+ "tool_call_id": tool_call["id"],
+ "content": json.dumps(result)
+ })
+ except Exception as error:
+ # Send error back to model
+ messages.append({
+ "role": "tool",
+ "tool_call_id": tool_call["id"],
+ "content": json.dumps({"error": str(error)})
+ })
+```
+
+### 6. Keep Tools Available
+
+Include tools in follow-up requests so the model can call them again if needed:
+
+```python
+response2 = edgee.send(
+ model="gpt-4o",
+ input={
+        "messages": messages_with_tool_results,
+ "tools": [
+ # Keep the same tools available
+ {"type": "function", "function": {...}}
+ ]
+ }
+)
+```
+
+**Example - Checking for Tool Calls:**
+
+```python
+if response.tool_calls:
+ # Model wants to call a function
+ for tool_call in response.tool_calls:
+ print(f"Function: {tool_call['function']['name']}")
+ print(f"Arguments: {tool_call['function']['arguments']}")
+```
+
+**Example - Executing Functions and Sending Results:**
+
+```python
+# Execute the function
+tool_call = response.tool_calls[0]
+args = json.loads(tool_call["function"]["arguments"])
+weather_result = get_weather(args["location"], args.get("unit", "celsius"))
+
+# Send the result back
+response2 = edgee.send(
+ model="gpt-4o",
+ input={
+ "messages": [
+ {"role": "user", "content": "What is the weather in Paris?"},
+ response.message, # Include assistant's message with tool_calls
+ {
+ "role": "tool",
+ "tool_call_id": tool_call["id"],
+ "content": json.dumps(weather_result)
+ }
+ ],
+ "tools": [
+ # Include the same tools for potential follow-up calls
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "Get the current weather for a location",
+ "parameters": {...}
+ }
+ }
+ ]
+ }
+)
+
+print(response2.text)
+# "The weather in Paris is 15°C and sunny."
+```
diff --git a/sdk/rust/configuration.mdx b/sdk/rust/configuration.mdx
new file mode 100644
index 0000000..a3d1fcb
--- /dev/null
+++ b/sdk/rust/configuration.mdx
@@ -0,0 +1,191 @@
+---
+title: Rust SDK Configuration
+sidebarTitle: Configuration
+description: Learn how to configure and instantiate the Edgee Rust SDK.
+icon: settings-2
+---
+
+The Edgee Rust SDK provides multiple ways to instantiate a client. Rust offers both idiomatic named constructors and a unified constructor for consistency with other SDKs.
+
+## Overview
+
+The Rust SDK provides several constructor methods:
+
+- `Edgee::from_env()` - Reads from environment variables (idiomatic Rust)
+- `Edgee::with_api_key()` - Creates client with just an API key (convenience)
+- `Edgee::new()` - Creates client with full `EdgeeConfig` (type-safe)
+
+## Method 1: Environment Variables (Recommended for Production)
+
+The simplest and most secure approach is to use environment variables. The SDK will automatically read `EDGEE_API_KEY` and optionally `EDGEE_BASE_URL`.
+
+```rust
+use edgee::Edgee;
+
+// Reads from EDGEE_API_KEY and EDGEE_BASE_URL environment variables
+let client = Edgee::from_env()?;
+```
+
+## Method 2: API Key Only (Quick Start)
+
+For quick testing or simple scripts, use `with_api_key()`:
+
+```rust
+use edgee::Edgee;
+
+// Creates client with default base URL (https://api.edgee.ai)
+let client = Edgee::with_api_key("your-api-key");
+```
+
+**Note**: `with_api_key()` uses the `EDGEE_BASE_URL` environment variable if set, otherwise the default base URL (`https://api.edgee.ai`). To set a custom base URL in code, use Method 3.
+
+## Method 3: Configuration Object (Type-Safe)
+
+For full control and type safety, use `EdgeeConfig` with the builder pattern:
+
+```rust
+use edgee::{Edgee, EdgeeConfig};
+
+// Full configuration with builder pattern
+let config = EdgeeConfig::new("your-api-key")
+ .with_base_url("https://api.edgee.ai");
+
+let client = Edgee::new(config);
+```
+
+**Important**: The `api_key` is required and must be provided either via constructor argument or `EDGEE_API_KEY` environment variable. If neither is provided, an `Error::MissingApiKey` will be returned.
+
+## Error Handling
+
+The SDK uses Rust's `Result` type for explicit error handling:
+
+```rust
+use edgee::{Edgee, Error};
+
+match Edgee::from_env() {
+ Ok(client) => {
+ // Use client
+ }
+ Err(Error::MissingApiKey) => {
+ eprintln!("API key not found. Set EDGEE_API_KEY environment variable.");
+ }
+ Err(e) => {
+ eprintln!("Error: {}", e);
+ }
+}
+```
+
+### Using `?` Operator
+
+```rust
+use edgee::Edgee;
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+ let client = Edgee::from_env()?;
+ // Use client
+ Ok(())
+}
+```
+
+### Custom Error Handling
+
+```rust
+use edgee::{Edgee, Error};
+
+let client = match Edgee::from_env() {
+ Ok(client) => client,
+ Err(Error::MissingApiKey) => {
+ // Fallback to explicit config
+ Edgee::with_api_key("fallback-api-key")
+ }
+ Err(e) => return Err(e.into()),
+};
+```
+
+
+## Complete Examples
+
+### Example 1: Production Setup
+
+```rust
+// .env file
+// EDGEE_API_KEY=prod-api-key
+// EDGEE_BASE_URL=https://api.edgee.ai
+
+use dotenv::dotenv;
+use edgee::Edgee;
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+ dotenv().ok();
+ let client = Edgee::from_env()?;
+ // Use client
+ Ok(())
+}
+```
+
+### Example 2: Multi-Environment Setup
+
+```rust
+use edgee::{Edgee, EdgeeConfig};
+
+fn create_client() -> Result<Edgee, Box<dyn std::error::Error>> {
+ let env = std::env::var("ENVIRONMENT").unwrap_or_else(|_| "development".to_string());
+
+ match env.as_str() {
+        "production" => Ok(Edgee::from_env()?),
+ "staging" => {
+ let api_key = std::env::var("EDGEE_API_KEY")?;
+ Ok(Edgee::new(
+ EdgeeConfig::new(api_key)
+ ))
+ }
+ _ => Ok(Edgee::new(
+ EdgeeConfig::new("dev-api-key")
+ .with_base_url("https://eu.api.edgee.ai")
+ )),
+ }
+}
+```
+
+## Troubleshooting
+
+### "MissingApiKey" Error
+
+**Problem**: The SDK can't find your API key.
+
+**Solutions**:
+1. Set the environment variable:
+ ```bash
+ export EDGEE_API_KEY="your-api-key"
+ ```
+
+2. Use `with_api_key()`:
+ ```rust
+ let client = Edgee::with_api_key("your-api-key");
+ ```
+
+3. Use `EdgeeConfig`:
+ ```rust
+ let client = Edgee::new(EdgeeConfig::new("your-api-key"));
+ ```
+
+### Custom Base URL Not Working
+
+**Problem**: Your custom base URL isn't being used.
+
+**Check**:
+1. Verify the base URL in your configuration
+2. Check if environment variable `EDGEE_BASE_URL` is overriding it
+3. Ensure you're using the correct configuration method
+
+```rust
+// This will use the base_url from EdgeeConfig
+let client = Edgee::new(
+ EdgeeConfig::new("key")
+ .with_base_url("https://custom.example.com")
+);
+
+// This will use EDGEE_BASE_URL env var if set, otherwise default
+let client = Edgee::with_api_key("key");
+```
\ No newline at end of file
diff --git a/sdk/rust/index.mdx b/sdk/rust/index.mdx
index 7842ed1..c837077 100644
--- a/sdk/rust/index.mdx
+++ b/sdk/rust/index.mdx
@@ -1,553 +1,33 @@
---
title: Rust SDK
-sidebarTitle: Rust
+sidebarTitle: Introduction
description: Integrate the Rust SDK in your application.
-icon: rust
+icon: minus
---
The Edgee Rust SDK provides a modern, type-safe, async interface to interact with the Edgee AI Gateway. Built with Rust's powerful type system and async/await capabilities, it offers compile-time safety, zero-cost abstractions, and excellent performance.
## Installation
-Add the SDK to your `Cargo.toml`:
-
-```toml
-[dependencies]
-edgee = "2.0"
-tokio = { version = "1", features = ["full"] }
-```
-
-## Quick Start
-
-```rust
-use edgee::Edgee;
-
-#[tokio::main]
-async fn main() -> Result<(), Box<dyn std::error::Error>> {
- let client = Edgee::from_env()?;
-
- let response = client.send("gpt-4o", "What is the capital of France?").await?;
-
- println!("{}", response.text().unwrap_or(""));
- // "The capital of France is Paris."
-
- Ok(())
-}
-```
-
-## Configuration
-
-The SDK supports multiple configuration methods:
-
-### Using Environment Variables
-
-```rust
-use edgee::Edgee;
-
-// Reads EDGEE_API_KEY and optionally EDGEE_BASE_URL
-let client = Edgee::from_env()?;
-```
-
-Set environment variables:
```bash
-export EDGEE_API_KEY="your-api-key"
-export EDGEE_BASE_URL="https://api.edgee.ai" # optional
+cargo add edgee
```
-### Using API Key
+## Quick Start
```rust
use edgee::Edgee;
-// Creates client with default base URL
let client = Edgee::with_api_key("your-api-key");
-```
-
-### Using Configuration Object
-
-```rust
-use edgee::{Edgee, EdgeeConfig};
-
-let config = EdgeeConfig::new("your-api-key")
- .with_base_url("https://api.edgee.ai");
-
-let client = Edgee::new(config);
-```
-
-## Usage Examples
-
-### Simple String Input
-
-The simplest way to send a request:
-
-```rust
-let response = client
- .send("gpt-4o", "Explain quantum computing in simple terms.")
- .await?;
+let response = client.send("gpt-4o", "What is the capital of France?").await.unwrap();
println!("{}", response.text().unwrap_or(""));
-```
-
-### Multi-turn Conversation
-
-Use the `Message` constructors for type-safe message creation:
-
-```rust
-use edgee::Message;
-
-let messages = vec![
- Message::system("You are a helpful assistant."),
- Message::user("Hello!"),
-];
-
-let response = client.send("gpt-4o", messages).await?;
-println!("{}", response.text().unwrap_or(""));
-```
-
-### Using InputObject
-
-For complex requests with tools and configuration:
-
-```rust
-use edgee::{Message, InputObject};
-
-let input = InputObject::new(vec![
- Message::system("You are a helpful assistant."),
- Message::user("What's the weather like?"),
-]);
-
-let response = client.send("gpt-4o", input).await?;
-```
-
-### Function Calling (Tools)
-
-The SDK supports OpenAI-compatible function calling with strong typing:
-
-```rust
-use edgee::{Edgee, Message, InputObject, Tool, FunctionDefinition, JsonSchema};
-use std::collections::HashMap;
-
-let client = Edgee::from_env()?;
-
-// Define a function
-let function = FunctionDefinition {
- name: "get_weather".to_string(),
- description: Some("Get the current weather for a location".to_string()),
- parameters: JsonSchema {
- schema_type: "object".to_string(),
- properties: Some({
- let mut props = HashMap::new();
- props.insert("location".to_string(), serde_json::json!({
- "type": "string",
- "description": "City name"
- }));
- props
- }),
- required: Some(vec!["location".to_string()]),
- description: None,
- },
-};
-
-// Send request with tools
-let input = InputObject::new(vec![
- Message::user("What is the weather in Paris?")
-])
-.with_tools(vec![Tool::function(function)]);
-
-let response = client.send("gpt-4o", input).await?;
-
-// Check if the model wants to call a function
-if let Some(tool_calls) = response.tool_calls() {
- for call in tool_calls {
- println!("Function: {}", call.function.name);
- println!("Arguments: {}", call.function.arguments);
- }
-}
-```
-
-### Tool Response Handling
-
-After receiving a tool call, send the function result back:
-
-```rust
-use serde_json;
-
-// First request - model requests a tool call
-let input = InputObject::new(vec![
- Message::user("What is the weather in Paris?")
-])
-.with_tools(vec![/* tool definitions */]);
-
-let response1 = client.send("gpt-4o", input).await?;
-
-// Execute the function
-if let Some(tool_calls) = response1.tool_calls() {
- let tool_call = &tool_calls[0];
-
- // Parse arguments and execute function
- let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
- let result = get_weather(&args["location"].as_str().unwrap());
-
- // Second request - include tool response
- let mut messages = vec![
- Message::user("What is the weather in Paris?")
- ];
-
- // Add assistant's message with tool calls
- if let Some(first_choice) = response1.choices.first() {
- messages.push(first_choice.message.clone());
- }
-
- // Add tool response
- messages.push(Message::tool(tool_call.id.clone(), serde_json::to_string(&result)?));
-
- let response2 = client.send("gpt-4o", messages).await?;
- println!("{}", response2.text().unwrap_or(""));
-}
-```
-
-## Streaming
-
-The SDK supports streaming responses using Rust's `Stream` trait for real-time output:
-
-```rust
-use tokio_stream::StreamExt;
-
-let mut stream = client
- .stream("gpt-4o", "Explain quantum computing")
- .await?;
-
-while let Some(result) = stream.next().await {
- match result {
- Ok(chunk) => {
- // First chunk contains the role
- if let Some(role) = chunk.role() {
- println!("Role: {:?}", role);
- }
-
- // Content chunks
- if let Some(text) = chunk.text() {
- print!("{}", text);
- std::io::Write::flush(&mut std::io::stdout())?;
- }
-
- // Last chunk contains finish reason
- if let Some(reason) = chunk.finish_reason() {
- println!("\nFinish reason: {}", reason);
- }
- }
- Err(e) => eprintln!("Stream error: {}", e),
- }
-}
-```
-
-### Streaming with Messages
-
-Streaming works with message arrays too:
-
-```rust
-use edgee::Message;
-use tokio_stream::StreamExt;
-
-let messages = vec![
- Message::system("You are a helpful assistant."),
- Message::user("Write a poem about coding"),
-];
-
-let mut stream = client.stream("gpt-4o", messages).await?;
-
-while let Some(result) = stream.next().await {
- if let Ok(chunk) = result {
- if let Some(text) = chunk.text() {
- print!("{}", text);
- }
- }
-}
-```
-
-### Collecting Full Response from Stream
-
-You can collect the entire streamed response:
-
-```rust
-use tokio_stream::StreamExt;
-
-let mut stream = client.stream("gpt-4o", "Tell me a story").await?;
-let mut full_text = String::new();
-
-while let Some(result) = stream.next().await {
- if let Ok(chunk) = result {
- if let Some(text) = chunk.text() {
- full_text.push_str(text);
- }
- }
-}
-
-println!("Full response: {}", full_text);
-```
-
-## Response Structure
-
-### Non-Streaming Response
-
-The `send` method returns a `SendResponse`:
-
-```rust
-pub struct SendResponse {
- pub id: String,
- pub object: String,
- pub created: u64,
- pub model: String,
- pub choices: Vec,
- pub usage: Option,
-}
-
-pub struct Choice {
- pub index: u32,
- pub message: Message,
- pub finish_reason: Option,
-}
-
-pub struct Usage {
- pub prompt_tokens: u32,
- pub completion_tokens: u32,
- pub total_tokens: u32,
-}
-```
-
-### Accessing Response Data
-
-The SDK provides convenience methods:
-
-```rust
-let response = client.send("gpt-4o", "Hello!").await?;
-
-// Get the first choice's content
-let content = response.text(); // Returns Option<&str>
-
-// Check finish reason
-let finish_reason = response.finish_reason(); // 'stop', 'length', 'tool_calls', etc.
-
-// Access tool calls
-if let Some(tool_calls) = response.tool_calls() {
- // Process tool calls
-}
-
-// Access token usage
-if let Some(usage) = &response.usage {
- println!("Tokens used: {}", usage.total_tokens);
- println!("Prompt tokens: {}", usage.prompt_tokens);
- println!("Completion tokens: {}", usage.completion_tokens);
-}
-```
-
-### Streaming Response
-
-Streaming returns `StreamChunk` objects:
-
-```rust
-pub struct StreamChunk {
- pub id: String,
- pub object: String,
- pub created: u64,
- pub model: String,
- pub choices: Vec,
-}
-
-pub struct StreamChoice {
- pub index: u32,
- pub delta: StreamDelta,
- pub finish_reason: Option,
-}
-
-pub struct StreamDelta {
- pub role: Option,
- pub content: Option,
- pub tool_calls: Option>,
-}
-```
-
-## Type System
-
-The SDK uses Rust's type system for safety and clarity:
-
-### Role Enum
-
-```rust
-pub enum Role {
- System,
- User,
- Assistant,
- Tool,
-}
-```
-
-### Message Constructors
-
-```rust
-// System message
-Message::system("You are a helpful assistant")
-
-// User message
-Message::user("Hello, how are you?")
-
-// Assistant message
-Message::assistant("I'm doing well, thank you!")
-
-// Tool response message
-Message::tool("tool-call-id", "function result")
-```
-
-### Tool Types
-
-```rust
-pub struct FunctionDefinition {
- pub name: String,
- pub description: Option,
- pub parameters: JsonSchema,
-}
-
-pub struct Tool {
- pub tool_type: String,
- pub function: FunctionDefinition,
-}
-
-pub struct ToolCall {
- pub id: String,
- pub call_type: String,
- pub function: FunctionCall,
-}
-```
-
-## Error Handling
-
-The SDK uses `Result` for explicit error handling with custom error types:
-
-```rust
-use edgee::{Edgee, Error};
-
-match client.send("gpt-4o", "Hello").await {
- Ok(response) => {
- println!("{}", response.text().unwrap_or(""));
- }
- Err(Error::Api { status, message }) => {
- eprintln!("API error {}: {}", status, message);
- }
- Err(Error::MissingApiKey) => {
- eprintln!("API key not found");
- }
- Err(Error::Http(e)) => {
- eprintln!("HTTP error: {}", e);
- }
- Err(Error::Json(e)) => {
- eprintln!("JSON error: {}", e);
- }
- Err(e) => {
- eprintln!("Error: {}", e);
- }
-}
-```
-
-### Error Types
-
-```rust
-pub enum Error {
- Http(reqwest::Error), // HTTP request failed
- Json(serde_json::Error), // JSON serialization failed
- MissingApiKey, // API key not provided
- Api { status: u16, message: String }, // API returned an error
- Stream(String), // Streaming error
- InvalidConfig(String), // Invalid configuration
-}
-```
-
-## Advanced Features
-
-### Concurrent Requests
-
-Use tokio's concurrency features for parallel requests:
-
-```rust
-use tokio;
-
-let (response1, response2) = tokio::join!(
- client.send("gpt-4o", "Question 1"),
- client.send("gpt-4o", "Question 2"),
-);
-
-println!("Response 1: {}", response1?.text().unwrap_or(""));
-println!("Response 2: {}", response2?.text().unwrap_or(""));
-```
-
-### Flexible Input with Into Trait
-
-The SDK accepts multiple input types through the `Into` trait:
-
-```rust
-// &str
-client.send("gpt-4o", "Hello").await?;
-
-// String
-client.send("gpt-4o", String::from("Hello")).await?;
-
-// Vec
-client.send("gpt-4o", vec![Message::user("Hello")]).await?;
-
-// InputObject
-client.send("gpt-4o", input_object).await?;
-```
-
-## Why Choose Rust SDK?
-
-### Type Safety
-- **Compile-time guarantees**: Catch errors before runtime
-- **Strong typing**: No string typos for roles, clear structure
-- **Option types**: Explicit handling of optional fields
-
-### Performance
-- **Zero-cost abstractions**: High-level API with no runtime overhead
-- **Async/await**: Non-blocking I/O for better concurrency
-- **Memory efficiency**: No garbage collection, predictable performance
-
-### Safety
-- **Ownership**: Prevents use-after-free and data races
-- **Error handling**: Explicit `Result` types
-- **Thread safety**: Safe concurrent operations
-
-### Developer Experience
-- **Rich IDE support**: Autocomplete, inline documentation
-- **Refactoring**: Compiler-assisted code changes
-- **Pattern matching**: Expressive error handling
-
-## Examples
-
-See the [examples directory](https://github.com/edgee-cloud/rust-sdk/tree/main/rust-sdk/examples) for complete working examples:
-
-- **simple.rs**: Basic usage patterns
-- **streaming.rs**: Streaming responses
-- **tools.rs**: Function calling with tool execution
-
-Run examples:
-```bash
-export EDGEE_API_KEY="your-api-key"
-cargo run --example simple
-cargo run --example streaming
-cargo run --example tools
+// "The capital of France is Paris."
```
## What's Next?
-
-
- Explore the full REST API documentation.
-
-
- Browse 200+ models available through Edgee.
-
-
- Learn about intelligent routing, observability, and privacy controls.
-
-
- Get started with Edgee in minutes.
-
-
+- **[Configuration](/sdk/rust/configuration)** - Learn how to configure and instantiate the SDK
+- **[Send Method](/sdk/rust/send)** - Complete guide to the `send()` method
+- **[Stream Method](/sdk/rust/stream)** - Learn how to stream responses
+- **[Tools](/sdk/rust/tools)** - Detailed guide to function calling
diff --git a/sdk/rust/send.mdx b/sdk/rust/send.mdx
new file mode 100644
index 0000000..92524b2
--- /dev/null
+++ b/sdk/rust/send.mdx
@@ -0,0 +1,251 @@
+---
+title: Rust SDK - Send Method
+sidebarTitle: Send
+description: Complete guide to the send() method in the Rust SDK.
+icon: send
+---
+
+The `send()` method makes non-streaming chat completion requests to the Edgee AI Gateway and returns a `Result<SendResponse, Error>` containing the model's response.
+
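+Here is a minimal end-to-end example, using `Edgee::from_env()` (see [Configuration](/sdk/rust/configuration)) and Tokio as the async runtime:
+
+```rust
+use edgee::Edgee;
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+    // Reads EDGEE_API_KEY (and optionally EDGEE_BASE_URL) from the environment
+    let client = Edgee::from_env()?;
+
+    // A plain string input is converted to a single user message
+    let response = client.send("gpt-4o", "What is the capital of France?").await?;
+    println!("{}", response.text().unwrap_or(""));
+
+    Ok(())
+}
+```
+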
+## Arguments
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `impl Into<String>` | The model identifier to use (e.g., `"gpt-4o"`) |
+| `input` | `impl Into<InputObject>` | The input for the completion. Can be a string (`&str` or `String`), `Vec<Message>`, or `InputObject` |
+
+### Input Types
+
+The `send()` method accepts multiple input types through the `Into` trait:
+
+#### String Input
+
+When `input` is a string (`&str` or `String`), it's automatically converted to a user message:
+
+```rust
+let response = client.send("gpt-4o", "What is the capital of France?").await?;
+
+// Equivalent to: input: InputObject::new(vec![Message::user("What is the capital of France?")])
+println!("{}", response.text().unwrap_or(""));
+// "The capital of France is Paris."
+```
+
+#### `Vec<Message>`
+
+You can pass a vector of messages directly:
+
+```rust
+use edgee::Message;
+
+let messages = vec![
+ Message::system("You are a helpful assistant."),
+ Message::user("What is 2+2?"),
+];
+
+let response = client.send("gpt-4o", messages).await?;
+println!("{}", response.text().unwrap_or(""));
+// "2+2 equals 4."
+```
+
+#### InputObject
+
+When `input` is an `InputObject`, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `messages` | `Vec<Message>` | Array of conversation messages |
+| `tools` | `Option<Vec<Tool>>` | Array of function tools available to the model |
+| `tool_choice` | `Option<serde_json::Value>` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/rust/tools) for details |
+
+**Example with InputObject:**
+
+```rust
+use edgee::{Message, InputObject};
+
+let input = InputObject::new(vec![
+ Message::user("What is 2+2?")
+]);
+
+let response = client.send("gpt-4o", input).await?;
+println!("{}", response.text().unwrap_or(""));
+// "2+2 equals 4."
+```
+
+### Message Object
+
+Each message in the `messages` array is created using `Message` constructors:
+
+| Constructor | Description |
+|-------------|-------------|
+| `Message::system(content)` | System instructions that set the behavior of the assistant |
+| `Message::developer(content)` | Instructions provided by the application developer, prioritized ahead of user messages |
+| `Message::user(content)` | Instructions provided by an end user |
+| `Message::assistant(content)` | Assistant responses (can include `tool_calls`) |
+| `Message::tool(tool_call_id, content)` | Results from tool/function calls |
+
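+For example, a developer instruction can steer the conversation alongside user input (a short sketch using the constructors above):
+
+```rust
+use edgee::Message;
+
+let messages = vec![
+    // Developer instructions are prioritized ahead of user messages
+    Message::developer("Answer in one short sentence."),
+    Message::user("Why is the sky blue?"),
+];
+
+let response = client.send("gpt-4o", messages).await?;
+println!("{}", response.text().unwrap_or(""));
+```
+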
+**Message Structure:**
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `Role` | The role of the message sender: `Role::System`, `Role::Developer`, `Role::User`, `Role::Assistant`, or `Role::Tool` |
+| `content` | `Option<String>` | The message content. Required for `System`, `User`, and `Tool` roles. Optional for `Assistant` when `tool_calls` is present |
+| `tool_calls` | `Option<Vec<ToolCall>>` | Array of tool calls made by the assistant. Only present in `Assistant` messages |
+| `tool_call_id` | `Option<String>` | ID of the tool call this message is responding to. Required for `Tool` role messages |
+
+**Example - System and User Messages:**
+
+```rust
+use edgee::Message;
+
+let messages = vec![
+ Message::system("You are a helpful assistant."),
+ Message::user("What is 2+2?"),
+ Message::assistant("2+2 equals 4."),
+ Message::user("What about 3+3?"),
+];
+
+let response = client.send("gpt-4o", messages).await?;
+println!("{}", response.text().unwrap_or(""));
+// "3+3 equals 6."
+```
+
+For complete tool calling examples and best practices, see [Tools documentation](/sdk/rust/tools).
+
+## Return Value
+
+The `send()` method returns a `Result<SendResponse, Error>`. On success, it contains:
+
+### SendResponse Object
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | `String` | Unique identifier for the completion |
+| `object` | `String` | Object type (typically `"chat.completion"`) |
+| `created` | `u64` | Unix timestamp of when the completion was created |
+| `model` | `String` | Model identifier used for the completion |
+| `choices` | `Vec<Choice>` | Array of completion choices (typically one) |
+| `usage` | `Option<Usage>` | Token usage information (if provided by the API) |
+
+### Choice Object
+
+Each choice in the `choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `index` | `u32` | The index of this choice in the array |
+| `message` | `Message` | The assistant's message response |
+| `finish_reason` | `Option<String>` | Reason why the generation stopped. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `None` |
+
+**Example - Handling Multiple Choices:**
+
+```rust
+let response = client.send("gpt-4o", "Give me a creative idea.").await?;
+
+// Process all choices
+for choice in &response.choices {
+ println!("Choice {}: {:?}", choice.index, choice.message.content);
+ println!("Finish reason: {:?}", choice.finish_reason);
+}
+```
+
+### Message Object (in Response)
+
+The `message` in each choice has:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `Role` | The role of the message (typically `Role::Assistant`) |
+| `content` | `Option<String>` | The text content of the response. `None` when `tool_calls` is present |
+| `tool_calls` | `Option<Vec<ToolCall>>` | Array of tool calls requested by the model (if any). See [Tools documentation](/sdk/rust/tools) for details |
+
+### Usage Object
+
+Token usage information (when available):
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `prompt_tokens` | `u32` | Number of tokens in the prompt |
+| `completion_tokens` | `u32` | Number of tokens in the completion |
+| `total_tokens` | `u32` | Total tokens used (prompt + completion) |
+
+**Example - Accessing Token Usage:**
+
+```rust
+let response = client.send("gpt-4o", "Explain quantum computing briefly.").await?;
+
+if let Some(usage) = &response.usage {
+ println!("Prompt tokens: {}", usage.prompt_tokens);
+ println!("Completion tokens: {}", usage.completion_tokens);
+ println!("Total tokens: {}", usage.total_tokens);
+}
+```
+
+## Convenience Methods
+
+The `SendResponse` struct provides convenience methods for easier access:
+
+| Method | Return Type | Description |
+|--------|-------------|-------------|
+| `text()` | `Option<&str>` | Shortcut to `choices[0].message.content.as_deref()` |
+| `message()` | `Option<&Message>` | Shortcut to `choices[0].message` |
+| `finish_reason()` | `Option<&str>` | Shortcut to `choices[0].finish_reason.as_deref()` |
+| `tool_calls()` | `Option<&Vec<ToolCall>>` | Shortcut to `choices[0].message.tool_calls.as_ref()` |
+
+**Example - Using Convenience Methods:**
+
+```rust
+let response = client.send("gpt-4o", "Hello!").await?;
+
+// Instead of: response.choices[0].message.content.as_deref()
+if let Some(text) = response.text() {
+ println!("{}", text);
+}
+
+// Instead of: response.choices[0].message
+if let Some(message) = response.message() {
+ println!("Role: {:?}", message.role);
+}
+
+// Instead of: response.choices[0].finish_reason.as_deref()
+if let Some(reason) = response.finish_reason() {
+ println!("Finish reason: {}", reason);
+}
+
+// Instead of: response.choices[0].message.tool_calls.as_ref()
+if let Some(tool_calls) = response.tool_calls() {
+ println!("Tool calls: {:?}", tool_calls);
+}
+```
+
+## Error Handling
+
+The `send()` method returns a `Result<SendResponse, Error>`, which can fail with several error types:
+
+```rust
+use edgee::{Edgee, Error};
+
+match client.send("gpt-4o", "Hello!").await {
+ Ok(response) => {
+ println!("{}", response.text().unwrap_or(""));
+ }
+ Err(Error::Api { status, message }) => {
+ eprintln!("API error {}: {}", status, message);
+ }
+ Err(Error::Http(e)) => {
+ eprintln!("HTTP error: {}", e);
+ }
+ Err(Error::Json(e)) => {
+ eprintln!("JSON error: {}", e);
+ }
+ Err(e) => {
+ eprintln!("Error: {}", e);
+ }
+}
+```
+
+### Common Errors
+
+- **API errors**: `Error::Api { status, message }` - The API returned an error status
+- **HTTP errors**: `Error::Http(reqwest::Error)` - Network or HTTP errors
+- **JSON errors**: `Error::Json(serde_json::Error)` - JSON serialization/deserialization errors
+- **Missing API key**: `Error::MissingApiKey` - API key not provided
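+
+For transient failures such as network errors, a simple retry loop can help. A minimal sketch, assuming the request is safe to retry and the surrounding function returns `Result<(), Box<dyn std::error::Error>>`:
+
+```rust
+use edgee::Error;
+
+let mut attempts = 0;
+let response = loop {
+    match client.send("gpt-4o", "Hello!").await {
+        Ok(response) => break response,
+        Err(Error::Http(e)) if attempts < 3 => {
+            // Retry HTTP-level failures up to three times
+            attempts += 1;
+            eprintln!("HTTP error (attempt {}): {}", attempts, e);
+        }
+        Err(e) => return Err(e.into()),
+    }
+};
+
+println!("{}", response.text().unwrap_or(""));
+```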
diff --git a/sdk/rust/stream.mdx b/sdk/rust/stream.mdx
new file mode 100644
index 0000000..df72ebe
--- /dev/null
+++ b/sdk/rust/stream.mdx
@@ -0,0 +1,280 @@
+---
+title: Rust SDK - Stream Method
+sidebarTitle: Stream
+description: Complete guide to the stream() method in the Rust SDK.
+icon: square-stack
+---
+
+The `stream()` method makes streaming chat completion requests to the Edgee AI Gateway. It returns a `Result` containing a `Stream` that yields `Result<StreamChunk, Error>` items as they arrive from the API.
+
+## Arguments
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `impl Into<String>` | The model identifier to use (e.g., `"gpt-4o"`) |
+| `input` | `impl Into<InputObject>` | The input for the completion. Can be a string (`&str` or `String`), `Vec<Message>`, or `InputObject` |
+
+### Input Types
+
+The `stream()` method accepts the same input types as `send()`:
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```rust
+use tokio_stream::StreamExt;
+
+let mut stream = client.stream("gpt-4o", "Tell me a story").await?;
+
+while let Some(result) = stream.next().await {
+ match result {
+ Ok(chunk) => {
+ if let Some(text) = chunk.text() {
+ print!("{}", text);
+ }
+
+ if let Some(reason) = chunk.finish_reason() {
+ println!("\nFinished: {}", reason);
+ }
+ }
+ Err(e) => eprintln!("Stream error: {}", e),
+ }
+}
+// Equivalent to: input: InputObject::new(vec![Message::user("Tell me a story")])
+```
+
+#### `Vec<Message>` or `InputObject`
+
+When `input` is a `Vec<Message>` or `InputObject`, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `messages` | `Vec<Message>` | Array of conversation messages |
+| `tools` | `Option<Vec<Tool>>` | Array of function tools available to the model |
+| `tool_choice` | `Option<serde_json::Value>` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/rust/tools) for details |
+
+For details about `Message` type, see the [Send Method documentation](/sdk/rust/send#message-object).
+For details about `Tool` and `ToolChoice` types, see the [Tools documentation](/sdk/rust/tools).
+
+**Example - Streaming with Messages:**
+
+```rust
+use edgee::Message;
+use tokio_stream::StreamExt;
+
+let messages = vec![
+ Message::system("You are a helpful assistant."),
+ Message::user("Write a poem about coding"),
+];
+
+let mut stream = client.stream("gpt-4o", messages).await?;
+
+while let Some(result) = stream.next().await {
+ if let Ok(chunk) = result {
+ if let Some(text) = chunk.text() {
+ print!("{}", text);
+ }
+ }
+}
+```
+
+## Return Value
+
+The `stream()` method returns a `Result` containing a `Stream` that yields `Result<StreamChunk, Error>`. Each chunk contains incremental updates to the response.
+
+### StreamChunk Object
+
+Each chunk yielded by the stream has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | `String` | Unique identifier for the completion |
+| `object` | `String` | Object type (typically `"chat.completion.chunk"`) |
+| `created` | `u64` | Unix timestamp of when the chunk was created |
+| `model` | `String` | Model identifier used for the completion |
+| `choices` | `Vec<StreamChoice>` | Array of streaming choices (typically one) |
+
+### StreamChoice Object
+
+Each choice in the `choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `index` | `u32` | The index of this choice in the array |
+| `delta` | `StreamDelta` | The incremental update to the message |
+| `finish_reason` | `Option<String>` | Reason why the generation stopped. Only present in the final chunk. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `None` |
+
+**Example - Handling Multiple Choices:**
+
+```rust
+use tokio_stream::StreamExt;
+
+let mut stream = client.stream("gpt-4o", "Give me creative ideas").await?;
+
+while let Some(result) = stream.next().await {
+ if let Ok(chunk) = result {
+ for choice in &chunk.choices {
+ if let Some(content) = &choice.delta.content {
+ println!("Choice {}: {}", choice.index, content);
+ }
+ }
+ }
+}
+```
+
+### StreamDelta Object
+
+The `delta` object contains incremental updates:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `Option<Role>` | The role of the message (typically `Role::Assistant`). Only present in the **first chunk** |
+| `content` | `Option<String>` | Incremental text content. Each chunk contains a portion of the full response |
+| `tool_calls` | `Option<Vec<ToolCall>>` | Array of tool calls (if any). See [Tools documentation](/sdk/rust/tools) for details |
+
+## Convenience Methods
+
+The `StreamChunk` struct provides convenience methods for easier access:
+
+| Method | Return Type | Description |
+|--------|-------------|-------------|
+| `text()` | `Option<&str>` | Shortcut to `choices[0].delta.content.as_deref()` - the incremental text content |
+| `role()` | `Option<&Role>` | Shortcut to `choices[0].delta.role.as_ref()` - the message role (first chunk only) |
+| `finish_reason()` | `Option<&str>` | Shortcut to `choices[0].finish_reason.as_deref()` - the finish reason (final chunk only) |
+
+**Example - Using Convenience Methods:**
+
+```rust
+use tokio_stream::StreamExt;
+
+let mut stream = client.stream("gpt-4o", "Explain quantum computing").await?;
+
+while let Some(result) = stream.next().await {
+ match result {
+ Ok(chunk) => {
+ // Content chunks
+ if let Some(text) = chunk.text() {
+ print!("{}", text);
+ }
+
+ // First chunk contains the role
+ if let Some(role) = chunk.role() {
+ println!("\nRole: {:?}", role);
+ }
+
+ // Last chunk contains finish reason
+ if let Some(reason) = chunk.finish_reason() {
+ println!("\nFinish reason: {}", reason);
+ }
+ }
+ Err(e) => eprintln!("Stream error: {}", e),
+ }
+}
+```
+
+## Understanding Streaming Behavior
+
+### Chunk Structure
+
+1. **First chunk**: Contains `role` (typically `Role::Assistant`) and may contain initial `content`
+2. **Content chunks**: Contain incremental `content` updates
+3. **Final chunk**: Contains `finish_reason` indicating why generation stopped
+
+**Example - Collecting Full Response:**
+
+```rust
+use tokio_stream::StreamExt;
+
+let mut stream = client.stream("gpt-4o", "Tell me a story").await?;
+let mut full_text = String::new();
+
+while let Some(result) = stream.next().await {
+ match result {
+ Ok(chunk) => {
+ if let Some(text) = chunk.text() {
+ full_text.push_str(text);
+ print!("{}", text); // Also display as it streams
+ }
+ }
+ Err(e) => eprintln!("Stream error: {}", e),
+ }
+}
+
+println!("\n\nFull response ({} characters):", full_text.len());
+println!("{}", full_text);
+```
+
+### Finish Reasons
+
+| Value | Description |
+|-------|-------------|
+| `"stop"` | Model generated a complete response and stopped naturally |
+| `"length"` | Response was cut off due to token limit |
+| `"tool_calls"` | Model requested tool/function calls |
+| `"content_filter"` | Content was filtered by safety systems |
+| `None` | Generation is still in progress (not the final chunk) |
+
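+For example, you can watch the final chunk's finish reason to detect truncation or tool calls:
+
+```rust
+use tokio_stream::StreamExt;
+
+let mut stream = client.stream("gpt-4o", "Summarize this document").await?;
+
+while let Some(result) = stream.next().await {
+    if let Ok(chunk) = result {
+        if let Some(text) = chunk.text() {
+            print!("{}", text);
+        }
+        match chunk.finish_reason() {
+            Some("length") => eprintln!("\nWarning: response was cut off by the token limit"),
+            Some("tool_calls") => println!("\nModel requested tool calls"),
+            _ => {} // still streaming, or a natural stop
+        }
+    }
+}
+```
+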
+### Empty Chunks
+
+Some chunks may not contain `content`. This is normal and can happen when:
+- The chunk only contains metadata (role, finish_reason)
+- The chunk is part of tool call processing
+- Network buffering creates empty chunks
+
+Always check for `chunk.text()` before using it:
+
+```rust
+use tokio_stream::StreamExt;
+
+let mut stream = client.stream("gpt-4o", "Hello").await?;
+
+while let Some(result) = stream.next().await {
+ if let Ok(chunk) = result {
+ if let Some(text) = chunk.text() { // ✅ Good: Check before using
+ println!("{}", text);
+ }
+ // ❌ Bad: println!("{:?}", chunk.text()) - may print None
+ }
+}
+```
+
+## Error Handling
+
+The `stream()` method can return errors at two levels:
+
+1. **Initial error**: When creating the stream (the outer `Result` returned by `stream()`)
+2. **Stream errors**: Individual items yielded by the stream (each is a `Result<StreamChunk, Error>`)
+
+```rust
+use edgee::Error;
+use tokio_stream::StreamExt;
+
+// Handle initial error
+let mut stream = match client.stream("gpt-4o", "Hello!").await {
+ Ok(stream) => stream,
+ Err(Error::Api { status, message }) => {
+ eprintln!("API error {}: {}", status, message);
+ return;
+ }
+ Err(e) => {
+ eprintln!("Error creating stream: {}", e);
+ return;
+ }
+};
+
+// Handle stream errors
+while let Some(result) = stream.next().await {
+ match result {
+ Ok(chunk) => {
+ if let Some(text) = chunk.text() {
+ print!("{}", text);
+ }
+ }
+ Err(e) => {
+ eprintln!("Stream error: {}", e);
+ }
+ }
+}
+```
diff --git a/sdk/rust/tools.mdx b/sdk/rust/tools.mdx
new file mode 100644
index 0000000..46c16df
--- /dev/null
+++ b/sdk/rust/tools.mdx
@@ -0,0 +1,548 @@
+---
+title: Rust SDK - Tools (Function Calling)
+sidebarTitle: Tools
+description: Complete guide to function calling with the Rust SDK.
+icon: square-function
+---
+
+The Edgee Rust SDK supports OpenAI-compatible function calling (tools), allowing models to request execution of functions you define. This enables models to interact with external APIs, databases, and your application logic.
+
+## Overview
+
+Function calling works in two steps:
+
+1. **Request**: Send a request with tool definitions. The model may request to call one or more tools.
+2. **Execute & Respond**: Execute the requested functions and send the results back to the model.
+
+## Tool Definition
+
+A tool is defined using the `Tool` struct:
+
+```rust
+use edgee::{Tool, FunctionDefinition, JsonSchema};
+use std::collections::HashMap;
+
+let tool = Tool::function(FunctionDefinition {
+ name: "function_name".to_string(),
+ description: Some("Function description".to_string()),
+ parameters: JsonSchema {
+ schema_type: "object".to_string(),
+ properties: Some(HashMap::new()),
+ required: Some(vec![]),
+ description: None,
+ },
+});
+```
+
+### FunctionDefinition
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `name` | `String` | The name of the function (must be unique, a-z, A-Z, 0-9, _, -) |
+| `description` | `Option<String>` | Description of what the function does. **Highly recommended** - helps the model understand when to use it |
+| `parameters` | `JsonSchema` | JSON Schema object describing the function parameters |
+
+### Parameters Schema
+
+The `parameters` field uses JSON Schema format via the `JsonSchema` struct:
+
+```rust
+use edgee::JsonSchema;
+use std::collections::HashMap;
+
+let parameters = JsonSchema {
+ schema_type: "object".to_string(),
+ properties: Some({
+ let mut props = HashMap::new();
+ props.insert("paramName".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "Parameter description"
+ }));
+ props
+ }),
+ required: Some(vec!["paramName".to_string()]),
+ description: None,
+};
+```
+
+**Example - Defining a Tool:**
+
+```rust
+use edgee::{Edgee, Message, InputObject, Tool, FunctionDefinition, JsonSchema};
+use std::collections::HashMap;
+
+let client = Edgee::from_env()?;
+
+let function = FunctionDefinition {
+ name: "get_weather".to_string(),
+ description: Some("Get the current weather for a location".to_string()),
+ parameters: JsonSchema {
+ schema_type: "object".to_string(),
+ properties: Some({
+ let mut props = HashMap::new();
+ props.insert("location".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "The city and state, e.g. San Francisco, CA"
+ }));
+ props.insert("unit".to_string(), serde_json::json!({
+ "type": "string",
+ "enum": ["celsius", "fahrenheit"],
+ "description": "Temperature unit"
+ }));
+ props
+ }),
+ required: Some(vec!["location".to_string()]),
+ description: None,
+ },
+};
+
+let input = InputObject::new(vec![
+ Message::user("What is the weather in Paris?")
+])
+.with_tools(vec![Tool::function(function)]);
+
+let response = client.send("gpt-4o", input).await?;
+```
+
+## Tool Choice
+
+The `tool_choice` parameter controls when and which tools the model should call. In Rust, this is set using `serde_json::Value`:
+
+| Value | Type | Description |
+|-------|------|-------------|
+| `"auto"` | `serde_json::Value` | Let the model decide whether to call tools (default) |
+| `"none"` | `serde_json::Value` | Don't call any tools, even if provided |
+| `{"type": "function", "function": {"name": "function_name"}}` | `serde_json::Value` | Force the model to call a specific function |
+
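+**Example - Default Behavior (`"auto"`):**
+
+```rust
+use serde_json::json;
+
+// "auto" matches the default behavior: the model decides on its own
+// whether to call the tool
+let input = InputObject::new(vec![
+    Message::user("What is the weather?")
+])
+.with_tools(vec![Tool::function(function)])
+.with_tool_choice(json!("auto"));
+
+let response = client.send("gpt-4o", input).await?;
+```
+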
+**Example - Force a Specific Tool:**
+
+```rust
+use serde_json::json;
+
+let input = InputObject::new(vec![
+ Message::user("What is the weather?")
+])
+.with_tools(vec![Tool::function(function)])
+.with_tool_choice(json!({
+ "type": "function",
+ "function": {"name": "get_weather"}
+}));
+
+let response = client.send("gpt-4o", input).await?;
+// Model will always call get_weather
+```
+
+**Example - Disable Tool Calls:**
+
+```rust
+use serde_json::json;
+
+let input = InputObject::new(vec![
+ Message::user("What is the weather?")
+])
+.with_tools(vec![Tool::function(function)])
+.with_tool_choice(json!("none"));
+
+let response = client.send("gpt-4o", input).await?;
+// Model will not call tools, even though they're available
+```
+
+## Tool Call Object Structure
+
+When the model requests a tool call, you receive a `ToolCall` object in the response:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | `String` | Unique identifier for this tool call |
+| `call_type` | `String` | Type of tool call (typically `"function"`) |
+| `function` | `FunctionCall` | Function call details |
+| `function.name` | `String` | Name of the function to call |
+| `function.arguments` | `String` | JSON string containing the function arguments |
+
+### Parsing Arguments
+
+```rust
+use serde_json;
+
+if let Some(tool_calls) = response.tool_calls() {
+ let tool_call = &tool_calls[0];
+ let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
+ // args is now a serde_json::Value
+ println!("Location: {}", args["location"]);
+}
+```
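+
+Instead of working with a raw `serde_json::Value`, you can deserialize the arguments into a typed struct with serde. A sketch, assuming serde's `derive` feature is enabled (the `WeatherArgs` struct is illustrative, not part of the SDK):
+
+```rust
+use serde::Deserialize;
+
+#[derive(Deserialize)]
+struct WeatherArgs {
+    location: String,
+    unit: Option<String>,
+}
+
+if let Some(tool_calls) = response.tool_calls() {
+    // Invalid or missing fields surface as a serde_json::Error here
+    let args: WeatherArgs = serde_json::from_str(&tool_calls[0].function.arguments)?;
+    println!("Location: {}", args.location);
+}
+```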
+
+## Complete Example
+
+Here's a complete end-to-end example with error handling:
+
+```rust
+use edgee::{Edgee, Message, InputObject, Tool, FunctionDefinition, JsonSchema};
+use std::collections::HashMap;
+use serde_json;
+
+#[tokio::main]
+async fn main() -> Result<(), Box<dyn std::error::Error>> {
+ let client = Edgee::from_env()?;
+
+ // Define the weather function
+ let function = FunctionDefinition {
+ name: "get_weather".to_string(),
+ description: Some("Get the current weather for a location".to_string()),
+ parameters: JsonSchema {
+ schema_type: "object".to_string(),
+ properties: Some({
+ let mut props = HashMap::new();
+ props.insert("location".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "The city name"
+ }));
+ props.insert("unit".to_string(), serde_json::json!({
+ "type": "string",
+ "enum": ["celsius", "fahrenheit"],
+ "description": "Temperature unit"
+ }));
+ props
+ }),
+ required: Some(vec!["location".to_string()]),
+ description: None,
+ },
+ };
+
+ // Step 1: Initial request with tools
+ let input = InputObject::new(vec![
+ Message::user("What is the weather in Paris and Tokyo?")
+ ])
+ .with_tools(vec![Tool::function(function)]);
+
+ let response1 = client.send("gpt-4o", input).await?;
+
+ // Step 2: Execute all tool calls
+ let mut messages = vec![
+ Message::user("What is the weather in Paris and Tokyo?")
+ ];
+
+ // Add assistant's message
+ if let Some(message) = response1.message() {
+ messages.push(message.clone());
+ }
+
+ if let Some(tool_calls) = response1.tool_calls() {
+ for tool_call in tool_calls {
+ let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
+ let result = get_weather(
+ args["location"].as_str().unwrap(),
+ args.get("unit").and_then(|v| v.as_str())
+ );
+
+ messages.push(Message::tool(
+ tool_call.id.clone(),
+ serde_json::to_string(&result)?
+ ));
+ }
+ }
+
+ // Step 3: Send results back
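+    // `function` was moved into Tool::function in step 1, so an identical
+    // definition is rebuilt here to keep the tool available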
+ let function2 = FunctionDefinition {
+ name: "get_weather".to_string(),
+ description: Some("Get the current weather for a location".to_string()),
+ parameters: JsonSchema {
+ schema_type: "object".to_string(),
+ properties: Some({
+ let mut props = HashMap::new();
+ props.insert("location".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "The city name"
+ }));
+ props.insert("unit".to_string(), serde_json::json!({
+ "type": "string",
+ "enum": ["celsius", "fahrenheit"]
+ }));
+ props
+ }),
+ required: Some(vec!["location".to_string()]),
+ description: None,
+ },
+ };
+
+ let input2 = InputObject::new(messages)
+ .with_tools(vec![Tool::function(function2)]);
+
+ let response2 = client.send("gpt-4o", input2).await?;
+ println!("{}", response2.text().unwrap_or(""));
+
+ Ok(())
+}
+
+fn get_weather(location: &str, unit: Option<&str>) -> serde_json::Value {
+ serde_json::json!({
+ "location": location,
+ "temperature": 15,
+ "unit": unit.unwrap_or("celsius"),
+ "condition": "sunny"
+ })
+}
+```
+
+**Example - Multiple Tools:**
+
+You can provide multiple tools and let the model choose which ones to call:
+
+```rust
+let get_weather_tool = Tool::function(get_weather_function);
+let send_email_tool = Tool::function(send_email_function);
+
+let input = InputObject::new(vec![
+ Message::user("Get the weather in Paris and send an email about it")
+])
+.with_tools(vec![get_weather_tool, send_email_tool]);
+
+let response = client.send("gpt-4o", input).await?;
+```
+
+## Streaming with Tools
+
+The `stream()` method also supports tools. For details about streaming, see the [Stream Method documentation](/sdk/rust/stream).
+
+```rust
+use tokio_stream::StreamExt;
+
+let input = InputObject::new(vec![
+ Message::user("What is the weather in Paris?")
+])
+.with_tools(vec![Tool::function(function)]);
+
+let mut stream = client.stream("gpt-4o", input).await?;
+
+while let Some(result) = stream.next().await {
+ match result {
+ Ok(chunk) => {
+ if let Some(text) = chunk.text() {
+ print!("{}", text);
+ }
+
+ // Check for tool calls in the delta
+ if let Some(choice) = chunk.choices.first() {
+ if let Some(tool_calls) = &choice.delta.tool_calls {
+ println!("\nTool calls detected: {:?}", tool_calls);
+ }
+ }
+
+ if chunk.finish_reason() == Some("tool_calls") {
+ println!("\nModel requested tool calls");
+ }
+ }
+ Err(e) => eprintln!("Stream error: {}", e),
+ }
+}
+```
+
+## Best Practices
+
+### 1. Always Provide Descriptions
+
+Descriptions help the model understand when to use each function:
+
+```rust
+// ✅ Good
+let function = FunctionDefinition {
+ name: "get_weather".to_string(),
+ description: Some("Get the current weather conditions for a specific location".to_string()),
+ parameters: JsonSchema { /* ... */ },
+};
+
+// ❌ Bad
+let function = FunctionDefinition {
+ name: "get_weather".to_string(),
+ description: None, // Missing description
+ parameters: JsonSchema { /* ... */ },
+};
+```
+
+### 2. Use Clear Parameter Names
+
+```rust
+// ✅ Good
+properties.insert("location".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "The city name"
+}));
+
+// ❌ Bad
+properties.insert("loc".to_string(), serde_json::json!({
+ "type": "string"
+ // Unclear name, no description
+}));
+```
+
+### 3. Mark Required Parameters
+
+```rust
+let parameters = JsonSchema {
+ schema_type: "object".to_string(),
+ properties: Some({
+ let mut props = HashMap::new();
+ props.insert("location".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "City name"
+ }));
+ props.insert("unit".to_string(), serde_json::json!({
+ "type": "string",
+ "description": "Temperature unit"
+ }));
+ props
+ }),
+ required: Some(vec!["location".to_string()]), // location is required, unit is optional
+ description: None,
+};
+```
+
+### 4. Handle Multiple Tool Calls
+
+Models can request multiple tool calls in a single response. Use parallel execution when possible:
+
+```rust
+use futures::future;
+
+if let Some(tool_calls) = response.tool_calls() {
+    // Execute all tool calls concurrently (assumes execute_function is async)
+    let results = future::join_all(tool_calls.iter().map(|tool_call| async move {
+        let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
+        let result = execute_function(&tool_call.function.name, &args).await?;
+        Ok::<_, Box<dyn std::error::Error>>((tool_call.id.clone(), result))
+    }))
+    .await;
+
+    // Add all tool results to messages
+    for result in results {
+        let (tool_call_id, output) = result?;
+        messages.push(Message::tool(
+            tool_call_id,
+            serde_json::to_string(&output)?
+        ));
+    }
+}
+```
+
+**Example - Handling Multiple Tool Calls:**
+
+```rust
+// Step 2: Execute all tool calls
+let mut messages = vec![
+ Message::user("What is the weather in Paris and Tokyo?"),
+];
+
+if let Some(message) = response1.message() {
+ messages.push(message.clone());
+}
+
+if let Some(tool_calls) = response1.tool_calls() {
+ for tool_call in tool_calls {
+ let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
+ let result = get_weather(
+ args["location"].as_str().unwrap(),
+ args.get("unit").and_then(|v| v.as_str())
+ );
+
+ messages.push(Message::tool(
+ tool_call.id.clone(),
+ serde_json::to_string(&result)?
+ ));
+ }
+}
+```
+
+### 5. Error Handling in Tool Execution
+
+```rust
+if let Some(tool_calls) = response.tool_calls() {
+ for tool_call in tool_calls {
+        match serde_json::from_str::<serde_json::Value>(&tool_call.function.arguments) {
+ Ok(args) => {
+ match execute_function(&tool_call.function.name, &args) {
+ Ok(result) => {
+ messages.push(Message::tool(
+ tool_call.id.clone(),
+ serde_json::to_string(&result)?
+ ));
+ }
+ Err(e) => {
+ // Send error back to model
+ messages.push(Message::tool(
+ tool_call.id.clone(),
+ serde_json::to_string(&serde_json::json!({
+ "error": e.to_string()
+ }))?
+ ));
+ }
+ }
+ }
+ Err(e) => {
+ eprintln!("Failed to parse arguments: {}", e);
+ }
+ }
+ }
+}
+```
+
+### 6. Keep Tools Available
+
+Include tools in follow-up requests so the model can call them again if needed:
+
+```rust
+let input2 = InputObject::new(messages_with_tool_results)
+ .with_tools(vec![
+ // Keep the same tools available
+ Tool::function(function)
+ ]);
+
+let response2 = client.send("gpt-4o", input2).await?;
+```
+
+**Example - Checking for Tool Calls:**
+
+```rust
+if let Some(tool_calls) = response.tool_calls() {
+ // Model wants to call a function
+ for tool_call in tool_calls {
+ println!("Function: {}", tool_call.function.name);
+ println!("Arguments: {}", tool_call.function.arguments);
+ }
+}
+```
+
+**Example - Executing Functions and Sending Results:**
+
+```rust
+// Execute the function
+if let Some(tool_calls) = response.tool_calls() {
+ let tool_call = &tool_calls[0];
+ let args: serde_json::Value = serde_json::from_str(&tool_call.function.arguments)?;
+ let weather_result = get_weather(
+ args["location"].as_str().unwrap(),
+ args.get("unit").and_then(|v| v.as_str())
+ );
+
+ // Send the result back
+ let mut messages = vec![
+ Message::user("What is the weather in Paris?"),
+ ];
+
+ // Include assistant's message with tool_calls
+ if let Some(message) = response.message() {
+ messages.push(message.clone());
+ }
+
+ messages.push(Message::tool(
+ tool_call.id.clone(),
+ serde_json::to_string(&weather_result)?
+ ));
+
+ let input2 = InputObject::new(messages)
+ .with_tools(vec![Tool::function(function)]);
+
+ let response2 = client.send("gpt-4o", input2).await?;
+ println!("{}", response2.text().unwrap_or(""));
+ // "The weather in Paris is 15°C and sunny."
+}
+```
diff --git a/sdk/typescript/configuration.mdx b/sdk/typescript/configuration.mdx
new file mode 100644
index 0000000..40ba4b5
--- /dev/null
+++ b/sdk/typescript/configuration.mdx
@@ -0,0 +1,173 @@
+---
+title: TypeScript SDK Configuration
+sidebarTitle: Configuration
+description: Learn how to configure and instantiate the Edgee TypeScript SDK.
+icon: settings-2
+---
+
+The Edgee TypeScript SDK provides flexible ways to instantiate a client. All methods support automatic fallback to environment variables if configuration is not fully provided.
+
+## Overview
+
+The `Edgee` class constructor accepts multiple input types:
+
+- `undefined` or no arguments - reads from environment variables
+- `string` - API key string (backward compatible)
+- `EdgeeConfig` - Configuration interface (type-safe)
+
+## Method 1: Environment Variables (Recommended for Production)
+
+The simplest and most secure approach is to use environment variables.
+
+The SDK automatically reads `EDGEE_API_KEY` (required) and optionally `EDGEE_BASE_URL` from your environment variables.
+
+```typescript
+import Edgee from 'edgee';
+
+// EDGEE_API_KEY and (optionally) EDGEE_BASE_URL are loaded from the environment
+const edgee = new Edgee();
+```
+
+## Method 2: String API Key (Quick Start)
+
+For quick testing or simple scripts, pass the API key directly as a string:
+
+```typescript
+import Edgee from 'edgee';
+
+// API key only (uses default base URL: https://api.edgee.ai)
+const edgee = new Edgee('your-api-key');
+```
+
+**Note**: This method uses the default base URL (`https://api.edgee.ai`). To use a custom base URL, use Method 3 or set the `EDGEE_BASE_URL` environment variable.
+
+## Method 3: Configuration Object (Type-Safe)
+
+For better type safety, IDE support, and code clarity, use the `EdgeeConfig` interface:
+
+```typescript
+import Edgee, { type EdgeeConfig } from 'edgee';
+
+// Full configuration
+const edgee = new Edgee({
+ apiKey: 'your-api-key',
+ baseUrl: 'https://api.edgee.ai' // optional, defaults to https://api.edgee.ai
+});
+
+// Or using the EdgeeConfig type explicitly
+const config: EdgeeConfig = {
+ apiKey: 'your-api-key',
+ baseUrl: 'https://api.edgee.ai'
+};
+const edgee = new Edgee(config);
+```
+
+## Configuration Priority
+
+The SDK uses the following priority order when resolving configuration:
+
+1. **Constructor argument** (if provided)
+2. **Environment variable** (if constructor argument is missing)
+3. **Default value** (for `baseUrl` only, defaults to `https://api.edgee.ai`)
+
+## Complete Examples
+
+### Example 1: Production Setup
+
+```typescript
+// .env file
+// EDGEE_API_KEY=prod-api-key
+// EDGEE_BASE_URL=https://api.edgee.ai
+
+import 'dotenv/config';
+import Edgee from 'edgee';
+
+const edgee = new Edgee(); // Reads from environment
+```
+
+### Example 2: Multi-Environment Setup
+
+```typescript
+import Edgee from 'edgee';
+
+const ENV = process.env.NODE_ENV || 'development';
+
+let edgee: Edgee;
+
+if (ENV === 'production') {
+ edgee = new Edgee(); // Use environment variables
+} else if (ENV === 'staging') {
+ edgee = new Edgee({
+ apiKey: process.env.EDGEE_API_KEY!
+ });
+} else {
+ edgee = new Edgee({
+ apiKey: 'dev-api-key',
+ baseUrl: 'https://eu.api.edgee.ai'
+ });
+}
+```
+
+### Example 3: Next.js Setup
+
+```typescript
+// lib/edgee.ts
+import Edgee from 'edgee';
+
+export function createEdgeeClient() {
+ return new Edgee({
+ apiKey: process.env.EDGEE_API_KEY!
+ });
+}
+```
+
+## Troubleshooting
+
+### "EDGEE_API_KEY is not set" Error
+
+**Problem**: The SDK can't find your API key.
+
+**Solutions**:
+1. Set the environment variable:
+ ```bash
+ export EDGEE_API_KEY="your-api-key"
+ ```
+
+2. Pass it directly:
+ ```typescript
+ const edgee = new Edgee('your-api-key');
+ ```
+
+3. Use configuration object:
+ ```typescript
+ const edgee = new Edgee({ apiKey: 'your-api-key' });
+ ```
+
+### Custom Base URL Not Working
+
+**Problem**: Your custom base URL isn't being used.
+
+**Check**:
+1. Verify the base URL in your configuration
+2. Check if environment variable `EDGEE_BASE_URL` is overriding it
+3. Ensure you're using the correct configuration method
+
+```typescript
+// This will use the baseUrl from config object
+const edgee = new Edgee({
+ apiKey: 'key',
+ baseUrl: 'https://custom.example.com'
+});
+
+// This will use EDGEE_BASE_URL env var if set, otherwise default
+const edgee = new Edgee('key');
+```
+
+## Related Documentation
+
+- [TypeScript SDK Overview](/sdk/typescript) - Complete SDK documentation
+- [API Reference](/api-reference) - REST API documentation
+- [Quickstart Guide](/quickstart) - Get started with Edgee
diff --git a/sdk/typescript/index.mdx b/sdk/typescript/index.mdx
index 54e8b2a..4cc0256 100644
--- a/sdk/typescript/index.mdx
+++ b/sdk/typescript/index.mdx
@@ -1,11 +1,11 @@
---
title: TypeScript SDK
-sidebarTitle: Typescript
+sidebarTitle: Introduction
description: Integrate the TypeScript SDK in your application.
-icon: "https://d3gk2c5xim1je2.cloudfront.net/devicon/typescript.svg"
+icon: minus
---
-The Edgee TypeScript SDK provides a lightweight, type-safe interface to interact with the Edgee AI Gateway. It supports OpenAI-compatible chat completions and function calling.
+The Edgee TypeScript SDK provides a lightweight, type-safe interface to interact with the Edgee AI Gateway. It supports OpenAI-compatible chat completions, function calling, and streaming.
## Installation
@@ -18,367 +18,23 @@ npm install edgee
```typescript
import Edgee from 'edgee';
-const edgee = new Edgee(process.env.EDGEE_API_KEY);
+// Create client
+const edgee = new Edgee('your-api-key');
+// Send a simple request
const response = await edgee.send({
model: 'gpt-4o',
input: 'What is the capital of France?',
});
-console.log(response.choices[0].message.content);
+// Access the response
+console.log(response.text);
// "The capital of France is Paris."
```
-## Configuration
-
-The SDK can be configured in multiple ways:
-
-### Using Environment Variables
-
-```typescript
-// Set EDGEE_API_KEY
-const edgee = new Edgee();
-```
-
-### Using Constructor Parameters
-
-```typescript
-// String API key (backward compatible)
-const edgee = new Edgee('your-api-key');
-
-// Configuration object
-const edgee = new Edgee({
- apiKey: 'your-api-key',
- baseUrl: 'https://api.edgee.ai', // optional, defaults to https://api.edgee.ai
-});
-```
-
-## Usage Examples
-
-### Simple String Input
-
-The simplest way to send a request is with a string input:
-
-```typescript
-const response = await edgee.send({
- model: 'gpt-4o',
- input: 'Explain quantum computing in simple terms.',
-});
-
-console.log(response.choices[0].message.content);
-```
-
-### Full Message Array
-
-For more control, use a full message array:
-
-```typescript
-const response = await edgee.send({
- model: 'gpt-4o',
- input: {
- messages: [
- { role: 'system', content: 'You are a helpful assistant.' },
- { role: 'user', content: 'Hello!' },
- ],
- },
-});
-
-console.log(response.choices[0].message.content);
-```
-
-### Function Calling (Tools)
-
-The SDK supports OpenAI-compatible function calling:
-
-```typescript
-const response = await edgee.send({
- model: 'gpt-4o',
- input: {
- messages: [
- { role: 'user', content: 'What is the weather in Paris?' },
- ],
- tools: [
- {
- type: 'function',
- function: {
- name: 'get_weather',
- description: 'Get the current weather for a location',
- parameters: {
- type: 'object',
- properties: {
- location: {
- type: 'string',
- description: 'City name',
- },
- },
- required: ['location'],
- },
- },
- },
- ],
- tool_choice: 'auto', // or 'none', or { type: 'function', function: { name: 'get_weather' } }
- },
-});
-
-// Check if the model wants to call a function
-if (response.choices[0].message.tool_calls) {
- const toolCall = response.choices[0].message.tool_calls[0];
- console.log('Function:', toolCall.function.name);
- console.log('Arguments:', JSON.parse(toolCall.function.arguments));
-}
-```
-
-### Tool Response Handling
-
-After receiving a tool call, you can send the function result back:
-
-```typescript
-// First request - model requests a tool call
-const response1 = await edgee.send({
- model: 'gpt-4o',
- input: {
- messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
- tools: [/* ... tool definitions ... */],
- },
-});
-
-// Execute the function and send the result
-const toolCall = response1.choices[0].message.tool_calls[0];
-const functionResult = getWeather(JSON.parse(toolCall.function.arguments));
-
-// Second request - include tool response
-const response2 = await edgee.send({
- model: 'gpt-4o',
- input: {
- messages: [
- { role: 'user', content: 'What is the weather in Paris?' },
- response1.choices[0].message, // Include the assistant's message with tool_calls
- {
- role: 'tool',
- tool_call_id: toolCall.id,
- content: JSON.stringify(functionResult),
- },
- ],
- },
-});
-
-console.log(response2.choices[0].message.content);
-```
-
-## Streaming
-
-The SDK supports streaming responses for real-time output. Use streaming when you want to display tokens as they're generated.
-
-Use `stream()` to access full chunk metadata:
-
-```typescript
-// Stream full chunks with metadata
-for await (const chunk of edgee.stream('gpt-4o', 'Explain quantum computing')) {
- // First chunk contains the role
- if (chunk.role) {
- console.log('Role:', chunk.role);
- }
-
- // Content chunks
- if (chunk.text) {
- process.stdout.write(chunk.text);
- }
-
- // Last chunk contains finish reason
- if (chunk.finishReason) {
- console.log('\nFinish reason:', chunk.finishReason);
- }
-}
-```
-
-### Streaming with Messages
-
-Streaming works with full message arrays too:
-
-```typescript
-for await (const chunk of edgee.stream('gpt-4o', {
- messages: [
- { role: 'system', content: 'You are a helpful assistant.' },
- { role: 'user', content: 'Write a poem about coding' },
- ],
-})) {
- if (chunk.text) {
- process.stdout.write(chunk.text);
- }
-}
-```
-
-### Streaming Response Types
-
-Streaming uses different response types:
-
-```typescript
-// StreamChunk - returned by stream()
-interface StreamChunk {
- choices: {
- index: number;
- delta: {
- role?: string;
- content?: string;
- tool_calls?: ToolCall[];
- };
- finish_reason?: string | null;
- }[];
-
- // Convenience properties
- text: string | null; // Get content from first choice
- role: string | null; // Get role from first choice
- finishReason: string | null; // Get finish_reason from first choice
-}
-```
-
-### Convenience Properties
-
-Both `SendResponse` and `StreamChunk` have convenience properties for easier access:
-
-```typescript
-// Non-streaming response
-const response = await edgee.send({ model: 'gpt-4o', input: 'Hello' });
-console.log(response.text); // Instead of response.choices[0].message.content
-console.log(response.finishReason); // Instead of response.choices[0].finish_reason
-console.log(response.toolCalls); // Instead of response.choices[0].message.tool_calls
-
-// Streaming response
-for await (const chunk of edgee.stream('gpt-4o', 'Hello')) {
- console.log(chunk.text); // Instead of chunk.choices[0]?.delta?.content
- console.log(chunk.role); // Instead of chunk.choices[0]?.delta?.role
- console.log(chunk.finishReason); // Instead of chunk.choices[0]?.finish_reason
-}
-```
-
-## Response Structure
-
-The `send` method returns a `SendResponse` object:
-
-```typescript
-interface SendResponse {
- choices: {
- index: number;
- message: {
- role: string;
- content: string | null;
- tool_calls?: ToolCall[];
- };
- finish_reason: string | null;
- }[];
- usage?: {
- prompt_tokens: number;
- completion_tokens: number;
- total_tokens: number;
- };
-}
-```
-
-### Accessing Response Data
-
-```typescript
-const response = await edgee.send({
- model: 'gpt-4o',
- input: 'Hello!',
-});
-
-// Get the first choice's content
-const content = response.choices[0].message.content;
-
-// Check finish reason
-const finishReason = response.choices[0].finish_reason; // 'stop', 'length', 'tool_calls', etc.
-
-// Access token usage
-if (response.usage) {
- console.log(`Tokens used: ${response.usage.total_tokens}`);
- console.log(`Prompt tokens: ${response.usage.prompt_tokens}`);
- console.log(`Completion tokens: ${response.usage.completion_tokens}`);
-}
-```
-
-## Type Definitions
-
-The SDK exports TypeScript types for all request and response objects:
-
-```typescript
-import Edgee, {
- type Message,
- type Tool,
- type ToolChoice,
- type SendOptions,
- type SendResponse,
- type StreamChunk,
- type EdgeeConfig,
-} from 'edgee';
-```
-
-### Message Types
-
-```typescript
-interface Message {
- role: 'system' | 'user' | 'assistant' | 'tool';
- content?: string;
- name?: string;
- tool_calls?: ToolCall[];
- tool_call_id?: string;
-}
-```
-
-### Tool Types
-
-```typescript
-interface Tool {
- type: 'function';
- function: {
- name: string;
- description?: string;
- parameters?: Record;
- };
-}
-
-type ToolChoice =
- | 'none'
- | 'auto'
- | { type: 'function'; function: { name: string } };
-```
-
-## Error Handling
-
-The SDK throws errors for common issues:
-
-```typescript
-import Edgee from 'edgee';
-
-try {
- const edgee = new Edgee(); // Throws if EDGEE_API_KEY is not set
-} catch (error) {
- console.error('Configuration error:', error.message);
-}
-
-try {
- const response = await edgee.send({
- model: 'gpt-4o',
- input: 'Hello!',
- });
-} catch (error) {
- console.error('Request failed:', error);
- // Handle API errors, network errors, etc.
-}
-```
-
## What's Next?
-
-
- Explore the full REST API documentation.
-
-
- Browse 200+ models available through Edgee.
-
-
- Learn about intelligent routing, observability, and privacy controls.
-
-
- Get started with Edgee in minutes.
-
-
\ No newline at end of file
+- **[Configuration](/sdk/typescript/configuration)** - Learn how to configure and instantiate the SDK
+- **[Send Method](/sdk/typescript/send)** - Complete guide to the `send()` method
+- **[Stream Method](/sdk/typescript/stream)** - Learn how to stream responses
+- **[Tools](/sdk/typescript/tools)** - Detailed guide to function calling
diff --git a/sdk/typescript/send.mdx b/sdk/typescript/send.mdx
new file mode 100644
index 0000000..ff8cf61
--- /dev/null
+++ b/sdk/typescript/send.mdx
@@ -0,0 +1,233 @@
+---
+title: TypeScript SDK - Send Method
+sidebarTitle: Send
+description: Complete guide to the send() method in the TypeScript SDK.
+icon: send
+---
+
+The `send()` method makes non-streaming chat completion requests to the Edgee AI Gateway. It returns a `Promise<SendResponse>` with the model's response.
+
+## Arguments
+
+The `send()` method accepts a single `SendOptions` object with the following properties:
+
+| Property | Type | Description |
+|----------|------|---------|
+| `model` | `string` | The model identifier to use (e.g., `"openai/gpt-4o"`) |
+| `input` | `string \| InputObject` | The input for the completion. Can be a simple string or a structured `InputObject` |
+
+### Input Types
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: 'What is the capital of France?'
+});
+
+// Equivalent to: input: { messages: [{ role: 'user', content: 'What is the capital of France?' }] }
+console.log(response.text);
+// "The capital of France is Paris."
+```
+
+#### InputObject
+
+When `input` is an `InputObject`, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `messages` | `Message[]` | Array of conversation messages |
+| `tools` | `Tool[]` | Array of function tools available to the model |
+| `tool_choice` | `ToolChoice` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/typescript/tools) for details |
+
+**Example with InputObject:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'user', content: 'What is 2+2?' }
+ ]
+ }
+});
+
+console.log(response.text);
+// "2+2 equals 4."
+```
+
+### Message Object
+
+Each message in the `messages` array has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `string` | The role of the message sender: `"system"`, `"developer"`, `"user"`, `"assistant"`, or `"tool"` |
+| `content` | `string` | The message content. Required for `system`, `developer`, `user`, and `tool` roles; for `assistant` messages it may be omitted when `tool_calls` is present |
+| `name` | `string` | Optional name for the message sender |
+| `tool_calls` | `ToolCall[]` | Array of tool calls made by the assistant. Only present in `assistant` messages |
+| `tool_call_id` | `string` | ID of the tool call this message is responding to. Required for `tool` role messages |
+
+### Message Roles
+
+- **`system`**: System instructions that set the behavior of the assistant
+- **`developer`**: Instructions from the application developer, prioritized ahead of user messages
+- **`user`**: Messages from the end user
+- **`assistant`**: Assistant responses (can include `tool_calls`)
+- **`tool`**: Results from tool/function calls (requires `tool_call_id`)
+
+**Example - Multi-Turn Conversation:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'system', content: 'You are a helpful assistant.' },
+ { role: 'user', content: 'What is 2+2?' },
+ { role: 'assistant', content: '2+2 equals 4.' },
+ { role: 'user', content: 'What about 3+3?' }
+ ]
+ }
+});
+
+console.log(response.text);
+// "3+3 equals 6."
+```
+
+For complete tool calling examples and best practices, see [Tools documentation](/sdk/typescript/tools).
+
+## Return Value
+
+The `send()` method returns a `Promise<SendResponse>` with the following structure:
+
+### SendResponse Object
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `choices` | `Choice[]` | Array of completion choices (typically one) |
+| `usage` | `Usage \| undefined` | Token usage information (if provided by the API) |
+
+### Choice Object
+
+Each choice in the `choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `index` | `number` | The index of this choice in the array |
+| `message` | `Message` | The assistant's message response |
+| `finish_reason` | `string \| null` | Reason why the generation stopped. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `null` |
+
+**Example - Handling Multiple Choices:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: 'Give me a creative idea.'
+});
+
+// Process all choices
+response.choices.forEach((choice, index) => {
+ console.log(`Choice ${index}:`, choice.message.content);
+ console.log(`Finish reason: ${choice.finish_reason}`);
+});
+```
+
+### Message Object (in Response)
+
+The `message` in each choice has:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `string` | The role of the message (typically `"assistant"`) |
+| `content` | `string \| null` | The text content of the response. `null` when `tool_calls` is present |
+| `tool_calls` | `ToolCall[] \| undefined` | Array of tool calls requested by the model (if any). See [Tools documentation](/sdk/typescript/tools) for details |
+
+### Usage Object
+
+Token usage information (when available):
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `prompt_tokens` | `number` | Number of tokens in the prompt |
+| `completion_tokens` | `number` | Number of tokens in the completion |
+| `total_tokens` | `number` | Total tokens used (prompt + completion) |
+
+**Example - Accessing Token Usage:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: 'Explain quantum computing briefly.'
+});
+
+if (response.usage) {
+ console.log(`Prompt tokens: ${response.usage.prompt_tokens}`);
+ console.log(`Completion tokens: ${response.usage.completion_tokens}`);
+ console.log(`Total tokens: ${response.usage.total_tokens}`);
+}
+```
+
+## Convenience Properties
+
+The `SendResponse` class provides convenience getters for easier access:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `text` | `string \| null` | Shortcut to `choices[0].message.content` |
+| `message` | `Message \| null` | Shortcut to `choices[0].message` |
+| `finishReason` | `string \| null` | Shortcut to `choices[0].finish_reason` |
+| `toolCalls` | `ToolCall[] \| null` | Shortcut to `choices[0].message.tool_calls` |
+
+**Example - Using Convenience Properties:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: 'Hello!'
+});
+
+// Instead of: response.choices[0].message.content
+console.log(response.text);
+
+// Instead of: response.choices[0].message
+console.log(response.message);
+
+// Instead of: response.choices[0].finish_reason
+console.log(response.finishReason);
+
+// Instead of: response.choices[0].message.tool_calls
+if (response.toolCalls) {
+ console.log('Tool calls:', response.toolCalls);
+}
+```
+
+## Error Handling
+
+The `send()` method can throw errors in several scenarios:
+
+```typescript
+try {
+ const response = await edgee.send({
+ model: 'gpt-4o',
+ input: 'Hello!'
+ });
+} catch (error) {
+ if (error instanceof Error) {
+ // API errors: "API error {status}: {message}"
+ // Network errors: Standard fetch errors
+ console.error('Request failed:', error.message);
+ }
+}
+```
+
+### Common Errors
+
+- **API errors**: `Error: API error {status}: {message}` - The API returned an error status
+- **Network errors**: Standard fetch network errors
+- **Invalid input**: Errors from invalid request structure
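+
+As a minimal sketch, you can tell these cases apart by parsing the status out of the error message and retrying only transient failures. The regex below assumes the `API error {status}: {message}` format shown above; the retry policy and backoff values are illustrative, not part of the SDK:
+
+```typescript
+import Edgee from 'edgee';
+
+// Sketch: retry transient failures. Assumes the "API error {status}: {message}"
+// message format documented above; errors without a parsable status (e.g.
+// network failures) are treated as transient too.
+async function sendWithRetry(edgee: Edgee, model: string, input: string, maxRetries = 3) {
+  for (let attempt = 1; attempt <= maxRetries; attempt++) {
+    try {
+      return await edgee.send({ model, input });
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error);
+      const status = Number(/API error (\d+)/.exec(message)?.[1]);
+      const transient = Number.isNaN(status) || status === 429 || status >= 500;
+      if (!transient || attempt === maxRetries) throw error;
+      // Exponential backoff before the next attempt
+      await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250));
+    }
+  }
+  throw new Error('unreachable'); // The loop always returns or throws above
+}
+```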
diff --git a/sdk/typescript/stream.mdx b/sdk/typescript/stream.mdx
new file mode 100644
index 0000000..93a00d0
--- /dev/null
+++ b/sdk/typescript/stream.mdx
@@ -0,0 +1,213 @@
+---
+title: TypeScript SDK - Stream Method
+sidebarTitle: Stream
+description: Complete guide to the stream() method in the TypeScript SDK.
+icon: square-stack
+---
+
+The `stream()` method makes streaming chat completion requests to the Edgee AI Gateway. It returns an `AsyncGenerator<StreamChunk>` that yields response chunks as they arrive from the API.
+
+## Arguments
+
+The `stream()` method accepts two arguments:
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| `model` | `string` | The model identifier to use (e.g., `"openai/gpt-4o"`) |
+| `input` | `string \| InputObject` | The input for the completion. Can be a simple string or a structured `InputObject` |
+
+### Input Types
+
+#### String Input
+
+When `input` is a string, it's automatically converted to a user message:
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', 'Tell me a story')) {
+ if (chunk.text) {
+ process.stdout.write(chunk.text);
+ }
+
+ if (chunk.finishReason) {
+ console.log(`\nFinished: ${chunk.finishReason}`);
+ }
+}
+// Equivalent to: input: { messages: [{ role: 'user', content: 'Tell me a story' }] }
+```
+
+#### InputObject
+
+When `input` is an `InputObject`, you have full control over the conversation:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `messages` | `Message[]` | Array of conversation messages |
+| `tools` | `Tool[]` | Array of function tools available to the model |
+| `tool_choice` | `ToolChoice` | Controls which tool (if any) the model should call. See [Tools documentation](/sdk/typescript/tools) for details |
+
+For details about the `Message` type, see the [Send Method documentation](/sdk/typescript/send#message-object).
+For details about the `Tool` and `ToolChoice` types, see the [Tools documentation](/sdk/typescript/tools).
+
+
+**Example - Streaming with Messages:**
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', {
+ messages: [
+ { role: 'system', content: 'You are a helpful assistant.' },
+ { role: 'user', content: 'Write a poem about coding' }
+ ]
+})) {
+ if (chunk.text) {
+ process.stdout.write(chunk.text);
+ }
+}
+```
+
+## Return Value
+
+The `stream()` method returns an `AsyncGenerator<StreamChunk>`. Each chunk contains incremental updates to the response.
+
+### StreamChunk Object
+
+Each chunk yielded by the generator has the following structure:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `choices` | `StreamChoice[]` | Array of streaming choices (typically one) |
+
+### StreamChoice Object
+
+Each choice in the `choices` array contains:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `index` | `number` | The index of this choice in the array |
+| `delta` | `StreamDelta` | The incremental update to the message |
+| `finish_reason` | `string \| null \| undefined` | Reason why the generation stopped. Only present in the final chunk. Possible values: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, or `null` |
+
+**Example - Handling Multiple Choices:**
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', 'Give me creative ideas')) {
+ chunk.choices.forEach((choice, index) => {
+ if (choice.delta.content) {
+ console.log(`Choice ${index}: ${choice.delta.content}`);
+ }
+ });
+}
+```
+
+### StreamDelta Object
+
+The `delta` object contains incremental updates:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `role` | `string \| undefined` | The role of the message (typically `"assistant"`). Only present in the **first chunk** |
+| `content` | `string \| undefined` | Incremental text content. Each chunk contains a portion of the full response |
+| `tool_calls` | `ToolCall[] \| undefined` | Array of tool calls (if any). See [Tools documentation](/sdk/typescript/tools) for details |
+
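+When the model streams tool calls, the `tool_calls` fragments usually need to be assembled across chunks before they are usable. A minimal sketch, assuming OpenAI-style deltas in which each entry carries an `index` identifying the call it extends and `function.arguments` arrives as partial JSON strings (verify against the payloads your models actually return):
+
+```typescript
+// Accumulate streamed tool-call fragments into complete calls. The `index`
+// field and partial-arguments behavior are assumptions based on
+// OpenAI-compatible streaming, not guarantees of every provider.
+const calls: { id?: string; name?: string; args: string }[] = [];
+
+for await (const chunk of edgee.stream('gpt-4o', {
+  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
+  tools: [/* tool definitions, as in the Tools guide */],
+})) {
+  for (const tc of chunk.choices[0]?.delta?.tool_calls ?? []) {
+    const i = (tc as any).index ?? 0; // `index` is an assumption, see above
+    calls[i] ??= { args: '' };
+    if (tc.id) calls[i].id = tc.id;
+    if (tc.function?.name) calls[i].name = tc.function.name;
+    if (tc.function?.arguments) calls[i].args += tc.function.arguments;
+  }
+}
+
+// After the stream ends, each calls[i].args holds a complete JSON string
+// ready for JSON.parse()
+```
+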
+## Convenience Properties
+
+The `StreamChunk` class provides convenience getters for easier access:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `text` | `string \| null` | Shortcut to `choices[0].delta.content` - the incremental text content |
+| `role` | `string \| null` | Shortcut to `choices[0].delta.role` - the message role (first chunk only) |
+| `finishReason` | `string \| null` | Shortcut to `choices[0].finish_reason` - the finish reason (final chunk only) |
+
+**Example - Using Convenience Properties:**
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', 'Explain quantum computing')) {
+ // Content chunks
+ if (chunk.text) {
+ process.stdout.write(chunk.text);
+ }
+
+ // First chunk contains the role
+ if (chunk.role) {
+ console.log(`Role: ${chunk.role}`);
+ }
+
+ // Last chunk contains finish reason
+ if (chunk.finishReason) {
+ console.log(`\nFinish reason: ${chunk.finishReason}`);
+ }
+}
+```
+
+## Understanding Streaming Behavior
+
+### Chunk Structure
+
+1. **First chunk**: Contains `role` (typically `"assistant"`) and may contain initial `content`
+2. **Content chunks**: Contain incremental `content` updates
+3. **Final chunk**: Contains `finish_reason` indicating why generation stopped
+
+**Example - Collecting Full Response:**
+
+```typescript
+let fullText = '';
+
+for await (const chunk of edgee.stream('gpt-4o', 'Tell me a story')) {
+ if (chunk.text) {
+ fullText += chunk.text;
+ process.stdout.write(chunk.text); // Also display as it streams
+ }
+}
+
+console.log(`\n\nFull response (${fullText.length} characters):`);
+console.log(fullText);
+```
+
+
+### Finish Reasons
+
+| Value | Description |
+|-------|-------------|
+| `"stop"` | Model generated a complete response and stopped naturally |
+| `"length"` | Response was cut off due to token limit |
+| `"tool_calls"` | Model requested tool/function calls |
+| `"content_filter"` | Content was filtered by safety systems |
+| `null` | Generation is still in progress (not the final chunk) |
+
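+A consumer can branch on the final chunk's reason. A minimal sketch (the log messages are illustrative):
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', 'Summarize this document')) {
+  if (chunk.text) {
+    process.stdout.write(chunk.text);
+  }
+
+  switch (chunk.finishReason) {
+    case 'length':
+      console.warn('\nResponse was truncated by the token limit');
+      break;
+    case 'tool_calls':
+      console.log('\nModel requested tool calls');
+      break;
+    case 'content_filter':
+      console.warn('\nResponse was filtered by safety systems');
+      break;
+    // 'stop' is a natural finish; null means generation is still in progress
+  }
+}
+```
+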
+### Empty Chunks
+
+Some chunks may not contain `content`. This is normal and can happen when:
+- The chunk only contains metadata (role, finish_reason)
+- The chunk is part of tool call processing
+- Network buffering creates empty chunks
+
+Always check for `chunk.text` before using it:
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', 'Hello')) {
+ if (chunk.text) { // ✅ Good: Check before using
+ console.log(chunk.text);
+ }
+ // ❌ Bad: console.log(chunk.text) - may log null
+}
+```
+
+## Error Handling
+
+The `stream()` method can throw errors:
+
+```typescript
+try {
+ for await (const chunk of edgee.stream('gpt-4o', 'Hello!')) {
+ if (chunk.text) {
+ process.stdout.write(chunk.text);
+ }
+ }
+} catch (error) {
+ if (error instanceof Error) {
+ // API errors: "API error {status}: {message}"
+ // Network errors: Standard fetch errors
+ console.error('Stream failed:', error.message);
+ }
+}
+```
diff --git a/sdk/typescript/tools.mdx b/sdk/typescript/tools.mdx
new file mode 100644
index 0000000..773c2a4
--- /dev/null
+++ b/sdk/typescript/tools.mdx
@@ -0,0 +1,499 @@
+---
+title: TypeScript SDK - Tools (Function Calling)
+sidebarTitle: Tools
+description: Complete guide to function calling with the TypeScript SDK.
+icon: square-function
+---
+
+The Edgee TypeScript SDK supports OpenAI-compatible function calling (tools), allowing models to request execution of functions you define. This enables models to interact with external APIs, databases, and your application logic.
+
+## Overview
+
+Function calling works in two steps:
+
+1. **Request**: Send a request with tool definitions. The model may request to call one or more tools.
+2. **Execute & Respond**: Execute the requested functions and send the results back to the model.
+
+## Tool Definition
+
+A tool is defined using the `Tool` interface:
+
+```typescript
+interface Tool {
+ type: "function";
+ function: FunctionDefinition;
+}
+```
+
+### FunctionDefinition
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `name` | `string` | The name of the function. Must be unique and may contain only `a-z`, `A-Z`, `0-9`, underscores, and dashes |
+| `description` | `string` | Description of what the function does. **Highly recommended** - helps the model understand when to use it |
+| `parameters` | `Record<string, unknown>` | JSON Schema object describing the function parameters |
+
+### Parameters Schema
+
+The `parameters` field uses JSON Schema format:
+
+```typescript
+{
+ type: "object",
+ properties: {
+ paramName: {
+ type: "string" | "number" | "boolean" | "object" | "array",
+ description: "Parameter description"
+ }
+ },
+ required: ["paramName"] // Array of required parameter names
+}
+```
+
+**Example - Defining a Tool:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'user', content: 'What is the weather in Paris?' }
+ ],
+ tools: [
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather for a location',
+ parameters: {
+ type: 'object',
+ properties: {
+ location: {
+ type: 'string',
+ description: 'The city and state, e.g. San Francisco, CA'
+ },
+ unit: {
+ type: 'string',
+ enum: ['celsius', 'fahrenheit'],
+ description: 'Temperature unit'
+ }
+ },
+ required: ['location']
+ }
+ }
+ }
+ ],
+ tool_choice: 'auto'
+ }
+});
+```
+
+## Tool Choice
+
+The `tool_choice` parameter controls whether the model calls tools, and which ones:
+
+| Value | Type | Description |
+|-------|------|-------------|
+| `"auto"` | `string` | Let the model decide whether to call tools (default) |
+| `"none"` | `string` | Don't call any tools, even if provided |
+| `{ type: "function", function: { name: "function_name" } }` | `object` | Force the model to call a specific function |
+
+### ToolChoice Type
+
+```typescript
+type ToolChoice =
+ | "none"
+ | "auto"
+ | { type: "function"; function: { name: string } };
+```
+
+**Example - Force a Specific Tool:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'user', content: 'What is the weather?' }
+ ],
+ tools: [
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather',
+ parameters: { /* ... */ }
+ }
+ }
+ ],
+ tool_choice: {
+ type: 'function',
+ function: { name: 'get_weather' }
+ }
+ }
+});
+// Model will always call get_weather
+```
+
+**Example - Disable Tool Calls:**
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'user', content: 'What is the weather?' }
+ ],
+ tools: [
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather',
+ parameters: { /* ... */ }
+ }
+ }
+ ],
+ tool_choice: 'none'
+ }
+});
+// Model will not call tools, even though they're available
+```
+
+## Tool Call Object Structure
+
+When the model requests a tool call, you receive a `ToolCall` object:
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | `string` | Unique identifier for this tool call |
+| `type` | `string` | Type of tool call (typically `"function"`) |
+| `function` | `object` | Function call details |
+| `function.name` | `string` | Name of the function to call |
+| `function.arguments` | `string` | JSON string containing the function arguments |
+
+### Parsing Arguments
+
+```typescript
+const toolCall = response.toolCalls?.[0];
+if (toolCall) {
+  const args = JSON.parse(toolCall.function.arguments);
+  // args is now a plain JavaScript object
+  console.log(args.location); // e.g., "Paris"
+}
+```
+
+## Complete Example
+
+Here's a complete end-to-end example with error handling:
+
+```typescript
+import Edgee from 'edgee';
+
+const edgee = new Edgee('your-api-key');
+
+// Define the weather function
+async function getWeather(location: string, unit: string = 'celsius') {
+ // Simulate API call
+ return {
+ location,
+ temperature: 15,
+ unit,
+ condition: 'sunny'
+ };
+}
+
+// Step 1: Initial request with tools
+const response1 = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'user', content: 'What is the weather in Paris and Tokyo?' }
+ ],
+ tools: [
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather for a location',
+ parameters: {
+ type: 'object',
+ properties: {
+ location: {
+ type: 'string',
+ description: 'The city name'
+ },
+ unit: {
+ type: 'string',
+ enum: ['celsius', 'fahrenheit'],
+ description: 'Temperature unit'
+ }
+ },
+ required: ['location']
+ }
+ }
+ }
+ ],
+ tool_choice: 'auto'
+ }
+});
+
+// Step 2: Execute all tool calls
+const messages = [
+ { role: 'user', content: 'What is the weather in Paris and Tokyo?' },
+ response1.message! // Include assistant's message
+];
+
+if (response1.toolCalls) {
+ for (const toolCall of response1.toolCalls) {
+ const args = JSON.parse(toolCall.function.arguments);
+ const result = await getWeather(args.location, args.unit);
+
+ messages.push({
+ role: 'tool',
+ tool_call_id: toolCall.id,
+ content: JSON.stringify(result)
+ });
+ }
+}
+
+// Step 3: Send results back
+const response2 = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages,
+ tools: [
+ // Keep tools available for follow-up
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather for a location',
+ parameters: {
+ type: 'object',
+ properties: {
+ location: { type: 'string', description: 'The city name' },
+ unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }
+ },
+ required: ['location']
+ }
+ }
+ }
+ ]
+ }
+});
+
+console.log(response2.text);
+```
+
+**Example - Multiple Tools:**
+
+You can provide multiple tools and let the model choose which ones to call:
+
+```typescript
+const response = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [
+ { role: 'user', content: 'Get the weather in Paris and send an email about it' }
+ ],
+ tools: [
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather for a location',
+ parameters: {
+ type: 'object',
+ properties: {
+ location: { type: 'string', description: 'City name' }
+ },
+ required: ['location']
+ }
+ }
+ },
+ {
+ type: 'function',
+ function: {
+ name: 'send_email',
+ description: 'Send an email to a recipient',
+ parameters: {
+ type: 'object',
+ properties: {
+ to: { type: 'string', description: 'Recipient email address' },
+ subject: { type: 'string', description: 'Email subject' },
+ body: { type: 'string', description: 'Email body' }
+ },
+ required: ['to', 'subject', 'body']
+ }
+ }
+ }
+ ],
+ tool_choice: 'auto'
+ }
+});
+```
+
+## Streaming with Tools
+
+The `stream()` method also supports tools. For details about streaming, see the [Stream Method documentation](/sdk/typescript/stream).
+
+```typescript
+for await (const chunk of edgee.stream('gpt-4o', {
+ messages: [
+ { role: 'user', content: 'What is the weather in Paris?' }
+ ],
+ tools: [
+ {
+ type: 'function',
+ function: {
+ name: 'get_weather',
+ description: 'Get the current weather for a location',
+ parameters: {
+ type: 'object',
+ properties: {
+ location: { type: 'string', description: 'City name' }
+ },
+ required: ['location']
+ }
+ }
+ }
+ ],
+ tool_choice: 'auto'
+})) {
+ if (chunk.text) {
+ process.stdout.write(chunk.text);
+ }
+
+ // Check for tool calls in the delta
+ const toolCalls = chunk.choices[0]?.delta?.tool_calls;
+ if (toolCalls) {
+ console.log('\nTool calls detected:', toolCalls);
+ }
+
+ if (chunk.finishReason === 'tool_calls') {
+ console.log('\nModel requested tool calls');
+ }
+}
+```
+
+## Best Practices
+
+### 1. Always Provide Descriptions
+
+Descriptions help the model understand when to use each function:
+
+```typescript
+// ✅ Good
+{
+ name: 'get_weather',
+ description: 'Get the current weather conditions for a specific location',
+ parameters: { /* ... */ }
+}
+
+// ❌ Bad
+{
+ name: 'get_weather',
+ // Missing description
+ parameters: { /* ... */ }
+}
+```
+
+### 2. Use Clear Parameter Names
+
+```typescript
+// ✅ Good
+properties: {
+ location: { type: 'string', description: 'The city name' }
+}
+
+// ❌ Bad
+properties: {
+ loc: { type: 'string' } // Unclear name, no description
+}
+```
+
+### 3. Mark Required Parameters
+
+```typescript
+parameters: {
+ type: 'object',
+ properties: {
+ location: { type: 'string', description: 'City name' },
+ unit: { type: 'string', description: 'Temperature unit' }
+ },
+ required: ['location'] // location is required, unit is optional
+}
+```
+
+### 4. Handle Multiple Tool Calls
+
+Models can request multiple tool calls in a single response. Use `Promise.all()` to execute them in parallel:
+
+```typescript
+if (response.toolCalls && response.toolCalls.length > 0) {
+ const results = await Promise.all(
+ response.toolCalls.map(async (toolCall) => {
+ const args = JSON.parse(toolCall.function.arguments);
+ const result = await executeFunction(toolCall.function.name, args);
+ return {
+ tool_call_id: toolCall.id,
+ result
+ };
+ })
+ );
+
+ // Add all tool results to messages
+ const toolMessages = results.map(({ tool_call_id, result }) => ({
+ role: 'tool' as const,
+ tool_call_id,
+ content: JSON.stringify(result)
+ }));
+
+ messages.push(...toolMessages);
+}
+```
+
+### 5. Error Handling in Tool Execution
+
+```typescript
+if (response.toolCalls) {
+ for (const toolCall of response.toolCalls) {
+ try {
+ const args = JSON.parse(toolCall.function.arguments);
+ const result = await executeFunction(toolCall.function.name, args);
+
+ messages.push({
+ role: 'tool',
+ tool_call_id: toolCall.id,
+ content: JSON.stringify(result)
+ });
+    } catch (error) {
+      // Send the error back to the model; narrow `error` first, since it is
+      // `unknown` under strict TypeScript
+      const message = error instanceof Error ? error.message : String(error);
+      messages.push({
+        role: 'tool',
+        tool_call_id: toolCall.id,
+        content: JSON.stringify({ error: message })
+      });
+ }
+ }
+}
+```
+
+### 6. Keep Tools Available
+
+Include tools in follow-up requests so the model can call them again if needed:
+
+```typescript
+const response2 = await edgee.send({
+ model: 'gpt-4o',
+ input: {
+ messages: [...messagesWithToolResults],
+ tools: [
+ // Keep the same tools available
+ { type: 'function', function: { /* ... */ } }
+ ]
+ }
+});
+```
+