diff --git a/README.md b/README.md
index e2f533e..80b5430 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ AMRS builds on top of [async-openai](https://github.com/64bit/async-openai) to p
 
 ## Features
 
-- **Endpoints Support** (only basic ones because of limited resources):
+- **Endpoints Support**:
   - Chat Completions
   - Responses
   - More on the way
@@ -35,7 +35,7 @@ Or add the following line to your Cargo.toml:
 
 ## How to Use
 
-Here's a simple example with the Weighted Round Robin (WRR) routing mode. Before running the code, make sure to set your provider API key in the environment variable by running `export _API_KEY="your_provider_api_key"`.
+Here's a simple example with the Weighted Round Robin (WRR) router mode. Before running the code, make sure to set your provider API key in the environment variable by running `export _API_KEY="your_provider_api_key"`.
 
 Here we use OpenAI as an example.
 
@@ -49,7 +49,7 @@ use tokio::runtime::Runtime;
 fn main() {
     let config = client::Config::builder()
         .provider("openai")
-        .routing_mode(client::RouterMode::WRR)
+        .router_mode(client::RouterMode::WRR)
         .model(
             client::ModelConfig::builder()
                 .name("gpt-3.5-turbo")
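
For context on the setting being renamed here: `router_mode(client::RouterMode::WRR)` selects a weighted round robin routing policy. The sketch below shows what weighted round robin selection does conceptually; it is not AMRS's implementation, and every name in it (`Backend`, `pick`, the example weights) is hypothetical and chosen only for illustration.

```rust
// Minimal, self-contained sketch of smooth weighted round robin (WRR) selection.
// Not AMRS's implementation; names and weights here are purely illustrative.

struct Backend {
    name: &'static str,
    weight: i64,  // configured weight
    current: i64, // running counter used by the smooth-WRR selection rule
}

/// Bump every counter by its weight, pick the largest, then subtract the
/// total weight from the winner so picks spread out instead of bunching up.
fn pick<'a>(backends: &'a mut [Backend]) -> &'a Backend {
    let total: i64 = backends.iter().map(|b| b.weight).sum();
    for b in backends.iter_mut() {
        b.current += b.weight;
    }
    let winner = backends
        .iter_mut()
        .max_by_key(|b| b.current)
        .expect("at least one backend");
    winner.current -= total;
    winner
}

fn main() {
    let mut backends = vec![
        Backend { name: "model-a", weight: 3, current: 0 },
        Backend { name: "model-b", weight: 1, current: 0 },
    ];
    // With weights 3 and 1, four picks route three requests to "model-a"
    // and one to "model-b".
    for _ in 0..4 {
        println!("{}", pick(&mut backends).name);
    }
}
```

With weights 3 and 1, four consecutive picks send three requests to the heavier backend and one to the lighter one, which is the kind of distribution a WRR router mode is meant to produce.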