Closed ishaan-jaff closed 1 year ago
@minimaxir can you please take a look at this PR when possible? 😊 Happy to add docs/tests too if this initial commit looks good.
Looking into this now: it'll take a bit longer since it's adding a dependency.
After some investigation into LiteLLM, I will have to reject adding it despite the high demand for alternative services for a number of reasons:
The design intent of simpleaichat is to be very clear, transparent, and consistent, even in its codebase.
@minimaxir thanks for the feedback
Your implementations for pinging the API and ETLing the I/O are inefficient and would result in a slowdown for simpleaichat. Additionally, it's not clear if your hacks used to interface with non-ChatGPT APIs are optimal.
Was there something in particular that made it seem like it would result in a slowdown and sub-optimal results?
Your demo notebooks have undocumented behavior of automatically creating a dashboard, which is a complete nonstarter.
Thanks for pointing that out - it was an experimental feature that users can opt in to. We will clean that out.
It's more about optimization in general (e.g. minimizing serialization overhead, minimizing HTTP session creation).
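To illustrate the kind of optimization being referred to, here is a small hedged sketch (the names below are hypothetical, not from simpleaichat's codebase): serializing the static parts of a request payload once and reusing a single HTTP session across calls avoids per-request overhead.

```python
import json

# Hypothetical example: static request fields are computed once at setup,
# not rebuilt and re-serialized on every API call.
STATIC_PARAMS = {"model": "gpt-3.5-turbo", "temperature": 0.7}

def build_payload(messages):
    # Merge the per-call messages into the precomputed static params,
    # then serialize the whole payload exactly once.
    payload = dict(STATIC_PARAMS)
    payload["messages"] = messages
    return json.dumps(payload)

# Similarly, one HTTP session/client would be created once and reused for
# every request (e.g. with httpx: client = httpx.Client(), then
# client.post(url, content=payload) per call), instead of opening a new
# connection each time.
payload = build_payload([{"role": "user", "content": "Hello"}])
```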
This PR adds support for models from all the above-mentioned providers using https://github.com/BerriAI/litellm/
Here's a sample of how it's used:
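The sample itself is not preserved in this thread, but a minimal sketch of LiteLLM's provider-agnostic interface looks roughly like the following (the `ask` wrapper and its dry-run fallback are hypothetical additions so the snippet runs offline; `litellm.completion` is LiteLLM's documented entry point):

```python
# Hedged sketch: LiteLLM exposes a single completion() function that routes a
# ChatGPT-style request to many providers based on the model string.
def ask(model: str, prompt: str, api_key: str = None) -> str:
    messages = [{"role": "user", "content": prompt}]
    if api_key is None:
        # No credentials supplied: return the request we would have sent,
        # so this example stays runnable without network access.
        return f"[dry run] {model}: {messages[0]['content']}"
    from litellm import completion  # pip install litellm
    response = completion(model=model, messages=messages)
    return response["choices"][0]["message"]["content"]

print(ask("claude-2", "What is the meaning of life?"))
```

With a real API key set, the same call shape works for OpenAI, Anthropic, and other providers by changing only the model string.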