superchargez opened this issue 1 year ago
I've heard of a few organisations successfully making the switch from Pinecone to Weaviate, which is free and open source and can even be embedded. Seems ideal for a purely local installation, but I'm not sure if it's suitable for babyagi.
I have been working with Weaviate and have been using it in our product. I can help with this issue.
I'm also interested in using open-source & free alternatives! If possible, make it run in Colab.
Can you give any insight on Weaviate vs. Chroma? I've been playing with it, hoping it has enough features for simple memory.
You could be doing a lot of people (including myself) a TREMENDOUS favor by making a PR with an implementation of this.
BabyAGI supports local LLaMA, but that does require a rather beefy computer. If you want to extend the code to call hosted LLaMA instances instead, that would be great.
As for replacing Pinecone - there are a couple of folks (@atroyn and @kyleshrader) working on replacing it with Weaviate or Chroma. They have a draft PR (https://github.com/yoheinakajima/babyagi/pull/141) where you can join the conversation.
This is awesome!
CC: @hsm207 note: https://github.com/yoheinakajima/babyagi/pull/141/files
You can replace chat easily with https://github.com/keldenl/gpt-llama.cpp.
Embeddings, on the other hand... I wish there were a built-in way to use local embeddings and sentence-transformers so we could avoid using OpenAI.
Is there a way to use Open-Assistant or ChatGPT, or to run LLaMA in Google Colab?
And is there a free alternative to Pinecone, like FAISS, running locally or in Google Colab?
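FAISS itself needs `pip install faiss-cpu` and runs fine locally or in Colab. Conceptually, what it (and Pinecone) provide at babyagi's scale is just nearest-neighbour search over stored vectors, which a stdlib-only toy sketch can illustrate — `TinyIndex` and its method names are made up for this example:

```python
import math

# Toy stand-in for a flat vector index (what Pinecone/FAISS provide,
# minus the speed and persistence): store vectors, query by cosine
# similarity. Fine for a handful of memories; use FAISS for anything
# large.
class TinyIndex:
    def __init__(self):
        self.items = []  # list of (id, vector) pairs

    def upsert(self, item_id, vector):
        self.items.append((item_id, vector))

    def query(self, vector, top_k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        # Score every stored vector, highest similarity first.
        scored = [(cosine(vector, v), i) for i, v in self.items]
        scored.sort(reverse=True)
        return [i for _, i in scored[:top_k]]

idx = TinyIndex()
idx.upsert("task-1", [1.0, 0.0])
idx.upsert("task-2", [0.0, 1.0])
idx.upsert("task-3", [0.7, 0.7])
print(idx.query([1.0, 0.1], top_k=2))  # → ['task-1', 'task-3']
```

The babyagi loop only ever upserts task results and queries for the top-k most similar ones, so any backend exposing these two operations can stand in for Pinecone.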