This has been mentioned in other issues in the past:
I wonder if it's worth implementing a wrapper/abstraction layer like LiteLLM to make things more flexible?
- https://github.com/BerriAI/litellm
  - > Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
This is what projects like aider use:
- https://aider.chat/docs/llms/other.html
  - > Aider uses the litellm package to connect to hundreds of other models. You can use `aider --model <model-name>` to use any supported model.
  - > To explore the list of supported models you can run `aider --models <model-name>` with a partial model name. If the supplied name is not an exact match for a known model, aider will return a list of possible matching models.
Originally posted by @0xdevalias in #14 (comment)
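To make the idea concrete, a wrapper layer like that can be as thin as one shared chat-completion interface plus a per-backend adapter registry. A minimal TypeScript sketch follows; every name in it (`ChatMessage`, `LLMProvider`, `createProvider`, `EchoProvider`) is hypothetical and only illustrates the shape of the abstraction, not any real API:

```typescript
// Hypothetical sketch of a minimal provider-abstraction layer, in the
// spirit of what LiteLLM does for Python. All names are illustrative.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LLMProvider {
  /** Normalized chat-completion call, regardless of backend. */
  complete(messages: ChatMessage[]): Promise<string>;
}

// Each backend adapts its own API shape to the shared interface.
// A real adapter would make an HTTP call to OpenAI, Anthropic,
// Ollama, etc.; this stand-in just upper-cases the last message.
class EchoProvider implements LLMProvider {
  async complete(messages: ChatMessage[]): Promise<string> {
    return messages[messages.length - 1].content.toUpperCase();
  }
}

// A registry keyed by model name, so callers can just pass
// something like `--model <name>` on the CLI.
const providers: Record<string, () => LLMProvider> = {
  echo: () => new EchoProvider(),
};

function createProvider(model: string): LLMProvider {
  const factory = providers[model];
  if (!factory) throw new Error(`Unknown model: ${model}`);
  return factory();
}

// Usage:
createProvider("echo")
  .complete([{ role: "user", content: "hello" }])
  .then((out) => console.log(out)); // prints "HELLO"
```

Callers only ever see `LLMProvider`, so adding a new backend is a new adapter plus a registry entry rather than changes spread through the codebase.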
And would help support issues such as the following in a more unified way:
I haven't looked too deeply into all of the options available in this space, but one I stumbled across again today made me think to create this issue:
There's also this brief overview I did in a comment RE: wanting a JavaScript version of litellm:
And it may be interesting to look into options/services like this that aim to support choosing/optimising across different providers/models/etc:
- https://www.notdiamond.ai/
  - > An end-to-end multi-model framework
  - > Intelligent routing
    - > Not Diamond can help you take any evaluation data for any set of models over any set of inputs and create an optimal routing algorithm tailored to your application.
  - > Automatic prompt adaptation
    - > Automatically adapt prompts across LLMs so you always call the right model with the right prompt. No more manual tweaking and experimentation.
- https://www.notdiamond.ai/pricing
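The routing idea above can be illustrated with a trivially simple rule. This sketch is not how Not Diamond works (their routing is learned from evaluation data); the models, scores, and length-based selection rule here are all invented for illustration:

```typescript
// Hypothetical score-based model router. Candidate models, their
// cost/quality numbers, and the length heuristic are all made up.

interface ModelScore {
  model: string;
  costPerToken: number; // assumed relative cost
  quality: number;      // assumed eval score, 0..1
}

const candidates: ModelScore[] = [
  { model: "small-fast", costPerToken: 1, quality: 0.7 },
  { model: "large-smart", costPerToken: 10, quality: 0.95 },
];

// Route long prompts to the highest-quality model and short ones
// to the cheapest; a real router would learn this from eval data.
function route(prompt: string, maxCheapLength = 200): string {
  const pick =
    prompt.length > maxCheapLength
      ? candidates.reduce((a, b) => (b.quality > a.quality ? b : a))
      : candidates.reduce((a, b) => (b.costPerToken < a.costPerToken ? b : a));
  return pick.model;
}

console.log(route("short question")); // "small-fast"
```

Even a heuristic this crude shows where a router would slot into the abstraction: it picks a model name, and the wrapper layer resolves that name to a provider.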
## See Also
- `humanify openai --baseURL` #416
- `humanify openai --baseURL` #419