srush / MiniChain

A tiny library for coding with large language models.
https://srush-minichain.hf.space/
MIT License

Why not call OpenAI? #22

Open krrishdholakia opened 1 year ago

krrishdholakia commented 1 year ago

Hey @srush, saw that you're using Manifest for making OpenAI/LLM calls vs. calling it yourself - why is that?

https://github.com/srush/MiniChain/blob/b79ebc51bdedb836c9265eec2fcc21cd60b17327/minichain/backend.py#L206

Context: I'm working on LiteLLM, an abstraction library to simplify LLM API calls.
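For reference, a direct call through LiteLLM's OpenAI-style `completion` function would look roughly like this (the model name is just an example, and the dict-style response access assumes the usual OpenAI response shape):

```python
# Hypothetical sketch: replacing a Manifest-backed call with a direct
# LiteLLM call. Model name and prompt are placeholders.
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
)

# LiteLLM returns an OpenAI-style response object.
print(response["choices"][0]["message"]["content"])
```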

srush commented 1 year ago

No good reason, I think it didn't when I wrote this.

Would love to remove the LLM API and caching layer entirely from minichain if possible. If your library does that, I would switch.

krrishdholakia commented 1 year ago

That's great! We already handle the LLM API calling - including streaming. I'm assuming that's the part you're trying to replace? Will make a PR for it.
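A rough sketch of what the streaming path could look like, assuming LiteLLM mirrors the OpenAI chunk format (delta objects with an optional `content` field; model name is a placeholder):

```python
# Sketch only: stream tokens as they arrive and print them incrementally.
from litellm import completion

for chunk in completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
):
    # Each chunk is assumed to follow the OpenAI streaming schema.
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
```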

Last q - why don't you want to manage this yourself?

srush commented 1 year ago

> Last q - why don't you want to manage this yourself?

Minichain is currently 2 files, but I would like it to be 1 file :1st_place_medal:

More, though, because OpenAI keeps changing up the API, and it would be nice not to have to worry about that.

srush commented 1 year ago

The main features I need are call, embeddings, streaming, and stop keywords. In theory it might be nice to have caching and some of the more advanced features like the function-calling API. Finally, I would love to eventually support offline models through something like TGI.
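For illustration, here is a hedged sketch of the remaining two features (stop keywords and embeddings) through an OpenAI-style interface such as LiteLLM; the model names and prompts are placeholders, not anything MiniChain actually ships:

```python
# Illustrative only: plain call with stop keywords, plus an embeddings call.
from litellm import completion, embedding

# Completion that stops generating at a blank line.
resp = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "List three fruits:"}],
    stop=["\n\n"],
)

# Embeddings, assuming the OpenAI-style embedding response shape.
emb = embedding(
    model="text-embedding-ada-002",
    input=["a sentence to embed"],
)
vector = emb["data"][0]["embedding"]
```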