bakks / butterfish

A shell with AI superpowers
https://butterfi.sh
MIT License

using a local llm #35

Closed: drhboss closed this issue 4 months ago

drhboss commented 4 months ago

I'm trying to run butterfish with a local LLM (ollama) and keep getting an error saying gpt-4-turbo is not available. Is there any way to use it with llama/codellama or any other model served through ollama?

I run:

$ butterfish prompt -vvv -u "http://localhost:11434/v1" "Is this thing working?"

and get this error:

Error: error, status code: 404, message: model 'gpt-4-turbo' not found, try pulling it first

bakks commented 4 months ago

Hey @drhboss, butterfish defaults to an OpenAI model and sends that model name in the request. You can override this with the -m flag, for example butterfish shell -m llama-3. The model string has to match whatever ollama is serving. Please give that a try!
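
For an ollama setup like the one above, a combined invocation might look like the sketch below. It uses the -u flag from the original command and the -m override described here; the model name is only an example and must match a model ollama actually has pulled (check with ollama list):

# Point butterfish at ollama's OpenAI-compatible endpoint and override the
# default model name. "codellama" is an assumed example; substitute the name
# that `ollama list` reports on your machine.
butterfish prompt -vvv \
  -u "http://localhost:11434/v1" \
  -m "codellama" \
  "Is this thing working?"

The same -u and -m flags should apply to butterfish shell if you want the interactive shell rather than a one-off prompt.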