-
Would like to see support for local LLMs like Alpaca and LLaMA.
-
I am running babyAGI with llama.cpp. Since it is slower than other models, I wait a long time without seeing what the AI is doing under the hood; I can only see that my CPU is at 100%.
I would lik…
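One way to see progress while a slow local model runs is to stream tokens as they are generated instead of waiting for the full completion. A minimal sketch, assuming the llama-cpp-python bindings (the model path in the usage comment is hypothetical):

```python
# Print each token as it arrives so a slow CPU run shows visible progress,
# instead of sitting silently at 100% CPU until the completion finishes.

def stream_tokens(chunks, write=print):
    """Consume a streaming completion, echoing tokens and returning the full text."""
    parts = []
    for chunk in chunks:
        # llama-cpp-python streaming chunks follow the OpenAI-style shape.
        text = chunk["choices"][0]["text"]
        parts.append(text)
        write(text)
    return "".join(parts)

# Usage sketch (requires a local model; the path is a placeholder):
#   from llama_cpp import Llama  # pip install llama-cpp-python
#   llm = Llama(model_path="models/7B/model.gguf", verbose=True)
#   stream_tokens(llm("What is the next task?", stream=True))
```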
-
1. How flexible is it with model file formats compared to Oobabooga, regarding GPTQ vs. GGUF vs. regular model files?
2. Can this read documents like GPT4All or Khoj or PrivateGPT (and maybe others)?
3. …
-
### Required prerequisites
- [X] I have searched the [Issue Tracker](https://github.com/camel-ai/camel/issues) and [Discussions](https://github.com/camel-ai/camel/discussions) that this hasn't alre…
-
1. Is it possible to connect this frontend to a LangChain backend?
Various agents and chains can be created with LangChain, and this frontend looks perfect for a custom chatbot for an xyz use case.
2. …
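The wiring between a chat frontend and a LangChain backend can be as thin as one adapter function. A sketch, assuming a chain that implements LangChain's `invoke` interface (the `"input"`/`"output"` keys are common but chain-specific, so treat them as assumptions):

```python
# Thin adapter between a chat frontend and a LangChain backend: the frontend
# posts a user message, the backend runs it through a chain and returns text.

def handle_message(chain, message: str) -> str:
    """Run one user message through a LangChain-style chain and return the reply."""
    result = chain.invoke({"input": message})
    # Chains may return a plain string or a dict holding the output under a key.
    if isinstance(result, dict):
        return str(result.get("output", result))
    return str(result)
```

An HTTP endpoint (FastAPI, Flask, etc.) would then just call `handle_message` with the deserialized request body.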
-
Hey, Ruben from [Aim](https://github.com/aimhubio/aim) here! 👋
First of all, thanks for this awesome project!
After experimenting with babyagi I found that integrating Aim can be extremely helpfu…
-
-
Is there a way to use [Open-Assistant](https://open-assistant.io/chat) or [ChatGPT](https://chat.openai.com/chat), or to run LLaMA in Google Colab?
And a free alternative to Pinecone like FAISS runnin…
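The core of what Pinecone provides (store embeddings, query by similarity) can run entirely locally. FAISS is the usual free choice; the sketch below shows the same idea in pure Python with cosine similarity, as a stand-in you could later swap for a FAISS index:

```python
# Minimal local vector store: a free, in-process stand-in for Pinecone.
# FAISS provides the same operations with far faster indexing at scale.
import math


class TinyVectorStore:
    def __init__(self):
        self.items = []  # list of (item_id, vector) pairs

    def add(self, item_id, vector):
        """Store an embedding under an id."""
        self.items.append((item_id, list(vector)))

    def query(self, vector, top_k=1):
        """Return the ids of the top_k most cosine-similar stored vectors."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self.items, key=lambda it: cosine(vector, it[1]), reverse=True)
        return [item_id for item_id, _ in ranked[:top_k]]
```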
-
I'm able to see the tasks on the terminal, but once my instance is closed all the tasks and outcomes are gone. How can I retrieve the output from previous sessions?
I set the following .env variabl…
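One general way to keep task output across sessions is to append each result to a file as it is produced, rather than relying on the terminal scrollback. A sketch using a JSONL log (the file name and helper functions are assumptions, not something babyAGI ships):

```python
# Persist each task result to a JSONL file so output survives after the
# terminal session is closed; call save_result() wherever a task completes.
import json
from pathlib import Path

LOG = Path("task_results.jsonl")  # hypothetical log location


def save_result(task: str, result: str, path: Path = LOG) -> None:
    """Append one task/result pair as a JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"task": task, "result": result}) + "\n")


def load_results(path: Path = LOG):
    """Read back all saved results from previous sessions."""
    if not path.exists():
        return []
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```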
-
An LMQL query with proper scripting (inside and outside the query) could simulate an LLM/GPT-based (semi-)autonomous agent (e.g. Auto-GPT, BabyAGI). What could not be covered by LMQL?
LMQL can handle intera…