Open: juangea opened this issue 1 year ago
BabyAGI and Local LLMs do seem to be a good match! I'd love to support it.
I've seen an open-source project called react-llm. https://github.com/r2d4/react-llm
I have limited knowledge about Local LLMs, but would implementing this help you achieve your goals? I would appreciate it if you could let me know.
It looks interesting, but I'm not sure how performant it would be. In that case the GPU executing the AI would be the local GPU; if I want to share the AI with some computers on my network, I'm not sure how that could work.
Out of curiosity, why not integrate it with Oobabooga as an extension?
if I want to share the AI with some computers on my network, I'm not sure how that could work.
I see, so there is such a use case. It may be difficult to cover such cases from the outset.
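One way the network-sharing case could work is for the machine with the GPU to expose its local model behind an OpenAI-style HTTP endpoint, with other computers on the LAN pointing their API base URL at that machine's IP. The sketch below is a minimal, hypothetical stand-in using only the standard library: `generate()` is a placeholder for the real local model call, and the host, port, and route are assumptions, not anything BabyAGI ships.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def generate(prompt: str) -> str:
    # Placeholder for the actual local model call (e.g. a quantized LLM).
    return f"echo: {prompt}"


class ChatHandler(BaseHTTPRequestHandler):
    """Serves a minimal OpenAI-style chat completions route."""

    def do_POST(self):
        if self.path != "/v1/chat/completions":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        prompt = req["messages"][-1]["content"]
        reply = {"choices": [{"message": {"role": "assistant",
                                          "content": generate(prompt)}}]}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


def serve(host="0.0.0.0", port=8000):
    # Binding to 0.0.0.0 makes the endpoint reachable from the LAN,
    # so other machines can use http://<gpu-host-ip>:<port>/v1 as API base.
    server = HTTPServer((host, port), ChatHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


# Quick local demo: serve on loopback and query it the way a LAN client would.
server = serve("127.0.0.1", 0)  # port 0 picks a free port
port = server.server_address[1]
payload = json.dumps({"messages": [{"role": "user", "content": "hello"}]}).encode()
request = urllib.request.Request(
    f"http://127.0.0.1:{port}/v1/chat/completions",
    data=payload, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(request) as resp:
    answer = json.loads(resp.read())["choices"][0]["message"]["content"]
server.shutdown()
```

Because the wire format mimics OpenAI's, clients would not need code changes beyond the base URL; the hard part left out here is the actual GPU inference behind `generate()`.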
There are GPU-enabled options. Paying "open"ai for every query is not economically feasible for non-corporations. LocalAI is a non-GPU option that would allow the feature to be added, but running LLMs on a CPU is like playing Counter-Strike on dial-up, so don't bother with non-GPU implementations.
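Since LocalAI exposes an OpenAI-compatible REST API, pointing BabyAGI at it is mostly a matter of swapping the base URL. The sketch below builds such a request with the standard library only; the port, route, and model name (`wizardlm`) are assumptions about a locally running server, not verified defaults.

```python
import json

# Assumed LocalAI endpoint on this machine; LocalAI mimics the OpenAI REST
# API, so existing OpenAI-client code can be redirected by changing this URL.
LOCALAI_BASE = "http://localhost:8080/v1"


def build_completion_request(prompt: str, model: str = "wizardlm"):
    """Build an OpenAI-style chat completion request aimed at LocalAI.

    Returns the target URL and the JSON body, ready to send with
    urllib.request or an OpenAI client configured with LOCALAI_BASE.
    """
    url = f"{LOCALAI_BASE}/chat/completions"
    body = json.dumps({
        "model": model,  # name of the locally loaded (quantized) model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }).encode()
    return url, body


url, body = build_completion_request("List three subtasks for researching local LLMs.")
```

The same shape of request works whether the backend runs on CPU or GPU; the client side cannot tell the difference, which is what makes these drop-in backends attractive.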
Can we already use local LLMs in BabyAGI, or will it be available later, or never?
That's the question. I can't use OpenAI, and I would love to run BabyAGI on the GPU in my local computer with models like WizardLM or GPT4-x-Vicuna, both quantized.
Do you plan to make a local version of this?
Thanks for this!