Closed cvarrichio closed 1 year ago
Yeah, I don't understand that either, tbh.
The default is still OpenAI models, i.e., Llama is not required unless you explicitly ask for it on the command line when invoking BabyAGI.
The issue is having to install all that stuff, not being forced to use it.
Absolutely. Running Llama is irrelevant; being forced to install it plus its dependencies is a dealbreaker.
Yeah, my bad. This is now fixed; the llama-cpp bindings are in extensions/requirements.txt.
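For anyone landing here later, a rough sketch of what that split means in practice (assuming the usual pip/requirements layout; exact file contents may differ in your checkout):

```shell
# Base install: OpenAI API usage only -- no llama-cpp, no CUDA downloads
pip install -r requirements.txt

# Optional: only if you actually want local Llama support,
# pull in the heavy llama-cpp dependencies separately
pip install -r extensions/requirements.txt
```

So the lightweight EC2-micro / work-laptop workflow keeps working with just the first line.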
Am I confused? It seems like Llama is now required. This is a massive increase in system requirements. Previously, since everything was handled through the APIs, you could run this on an EC2 micro instance or a work laptop. Now I'm finding that it's installing CUDA, several gigabytes worth of installs, etc. I understand why some people would want to use Llama, but shouldn't this clearly be optional?