Closed imchris1 closed 5 months ago
Yes, we could use extra endpoints for local LLMs with Ollama, and Groq for speed. It uses a lot of tokens only to stop halfway most of the time; there's too much trial and error to be able to afford it. And I'm not going to use it until this happens: once bitten, twice shy. With the last version I ended up paying 20 dollars a pop for it failing halfway or getting stuck in a loop. Right now, for instance, it doesn't have permission to activate the venv, though it does create it; why would that be?
No, I host my own locally... I don't pay money to do this.
Hey there, you will find some information here:
https://github.com/Pythagora-io/gpt-pilot/wiki/Using-GPT%E2%80%90Pilot-with-Local-LLMs
@imchris1 In my case (Ollama-Pilot-CasaOS), I'm using a Docker container to run GPT Pilot against the Ollama API. If you like my automation, leave a star ⭐
Version: Command-line (Python) version

Suggestion
I don't use an API key; I run my model locally in Docker with Ollama. How can this work with that setup?
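For anyone in the same situation: Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1`, so tools that expect an OpenAI endpoint can usually be pointed at it with a dummy API key. Below is a minimal, hedged sketch of what the GPT Pilot `.env` might look like; the exact variable names (`ENDPOINT`, `OPENAI_ENDPOINT`, `OPENAI_API_KEY`, `MODEL_NAME`) are assumptions here, so check the wiki page linked above for the names your version actually reads. The `curl` at the end is just a sanity check that Ollama is answering.

```shell
# .env fragment (variable names are assumptions; verify against the wiki)
# Ollama serves an OpenAI-compatible API on port 11434.
ENDPOINT=OPENAI
OPENAI_ENDPOINT=http://localhost:11434/v1/chat/completions
OPENAI_API_KEY=ollama          # any non-empty string; Ollama ignores it
MODEL_NAME=llama3              # whichever model you have pulled locally

# If GPT Pilot itself runs inside Docker, "localhost" points at the
# container, not your host. Use the Docker host gateway instead:
# OPENAI_ENDPOINT=http://host.docker.internal:11434/v1/chat/completions

# Sanity check that Ollama responds on the OpenAI-compatible route:
# curl http://localhost:11434/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d '{"model": "llama3", "messages": [{"role": "user", "content": "hi"}]}'
```

This is only a sketch under those assumptions, not the project's documented setup.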