miurla / babyagi-ui

BabyAGI UI is designed to make it easier to run and develop with babyagi in a web app, like ChatGPT.
https://babyagi-ui.vercel.app
MIT License

Do you plan to make it work with local GPU LLMs like quantized WizardLM? #34

Open · juangea opened this issue 1 year ago

juangea commented 1 year ago

That's the question: I can't use OpenAI, and I would love to run BabyAGI on the GPU in my local computer with models like WizardLM or GPT4-x-Vicuna, both quantized.

Do you plan to make a local version of this?

Thanks for this!

miurla commented 1 year ago

BabyAGI and Local LLMs do seem to be a good match! I'd love to support it.

I've seen an open-source project called react-llm. https://github.com/r2d4/react-llm

I have limited knowledge about Local LLMs, but would implementing this help you achieve your goals? I would appreciate it if you could let me know.

juangea commented 1 year ago

It looks interesting, but I'm not sure how performant it would be. In that case the GPU running the model would be the local machine's GPU; if I want to share the AI with other computers on my network, I'm not sure how that would work.

Out of curiosity, why not integrate it with Oobabooga as an extension?

miurla commented 1 year ago

if I want to share the AI with other computers on my network, I'm not sure how that would work.

I see, so there is such a use case. It may be difficult to cover such cases from the outset.

orophix commented 1 year ago

There are GPU-enabled options. Paying "Open"AI for every query is not economically feasible for non-corporations. LocalAI is a non-GPU option that would allow this feature to be added, but running LLMs on a CPU is like playing Counter-Strike on dial-up. So don't bother with non-GPU implementations.
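For what it's worth, since LocalAI (and Oobabooga's openai extension, mentioned above) expose an OpenAI-compatible REST API, one possible integration is simply pointing the app's existing OpenAI client at the local endpoint. A minimal sketch below; the base URL, env var name, and model name are assumptions for illustration, not part of babyagi-ui:

```ts
import OpenAI from "openai";

// Sketch only: LocalAI serves an OpenAI-compatible API, by default at
// http://localhost:8080/v1. Host, port, and model name here are assumptions.
// Replacing "localhost" with a LAN IP would also cover the earlier use case
// of sharing one GPU machine across several computers on the network.
const client = new OpenAI({
  baseURL: process.env.LOCAL_LLM_BASE_URL ?? "http://localhost:8080/v1",
  apiKey: "not-needed-locally", // the SDK requires a key; local servers typically ignore it
});

async function runTask(prompt: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "wizardlm-13b-q4_0", // hypothetical: whatever model the server has loaded
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content ?? "";
}

runTask("Create a task list for researching local LLMs.").then(console.log);
```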

jonarser commented 2 months ago

BabyAGI and Local LLMs do seem to be a good match! I'd love to support it.

I've seen an open-source project called react-llm. https://github.com/r2d4/react-llm

I have limited knowledge about Local LLMs, but would implementing this help you achieve your goals? I would appreciate it if you could let me know.

Can we already use local LLMs in BabyAGI, or will that be available later, or never?