Closed. drhboss closed this issue 4 months ago.
I'm trying to run butterfish with a local LLM (ollama) and keep getting an error about not having gpt-4-turbo. Is there any way to use it with llama/codellama or any other model served through ollama?

I run:

$ butterfish prompt -vvv -u "http://localhost:11434/v1" "Is this thing working?"

and get this error:

Error: error, status code: 404, message: model 'gpt-4-turbo' not found, try pulling it first
Hey @drhboss, butterfish defaults to an OpenAI model and sends that model name in the request. You can override it with the -m flag, for example:

butterfish shell -m llama-3

The model string has to match what ollama is serving. Please give that a try!
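Putting the two flags from this thread together, here is a minimal sketch of the combined invocation. The model name llama-3 is an assumption; substitute whatever your local ollama instance actually serves, which you can check first:

# Confirm the exact model name ollama is serving (llama-3 is assumed below)
$ ollama list

# Point butterfish at the local ollama endpoint and override the default model
$ butterfish prompt -m llama-3 -u "http://localhost:11434/v1" "Is this thing working?"

The same -m and -u flags should apply to butterfish shell as well, since the 404 comes from ollama rejecting the default gpt-4-turbo model name rather than from the endpoint itself.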