yoheinakajima / babyagi

MIT License
19.9k stars 2.61k forks source link

LLama required? #226

Closed cvarrichio closed 1 year ago

cvarrichio commented 1 year ago

Am I confused, or is Llama now required? This is a massive increase in system requirements. Previously, since everything was handled through the APIs, you could run this on an EC2 Micro instance or a work laptop. Now I'm finding that it's installing CUDA, several gigabytes worth of dependencies, etc. I understand why some people would want to use Llama, but shouldn't this clearly be optional?

dancingmadkefka commented 1 year ago

Ya I don't understand that either tbh

micromysore commented 1 year ago

The default is still the OpenAI models, i.e. Llama is not required unless you explicitly ask for it on the command line when invoking babyagi.

dancingmadkefka commented 1 year ago

The issue is with installing all of that stuff, not that we are forced to use it.

cvarrichio commented 1 year ago

Absolutely. Running Llama is irrelevant; being forced to install it plus its dependencies is a dealbreaker.

francip commented 1 year ago

Yeah, my bad. This is now fixed; the llama-cpp bindings are now in extensions/requirements.txt.
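With the fix above, the heavy Llama dependencies live in a separate requirements file, so an install might look like the sketch below. This assumes the base dependencies are listed in a top-level `requirements.txt` (not stated in this thread); the `extensions/requirements.txt` path comes from the comment above.

```shell
# Default setup: API-based dependencies only; no CUDA or native llama builds.
pip install -r requirements.txt

# Opt in to local Llama support only if you actually want it.
# This pulls in the llama-cpp bindings, which compile native code
# and can require several gigabytes of installs.
pip install -r extensions/requirements.txt
```

Keeping the opt-in file under `extensions/` means a plain install stays light enough for a small EC2 instance or a work laptop, which was the complaint that opened this issue.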