BuilderIO / ai-shell

A CLI that converts natural language to shell commands.
MIT License

Use local LLM via Ollama #123

Open Norfeldt opened 1 month ago

Norfeldt commented 1 month ago

Is it possible to use a local LLM via Ollama? If so, what's the setup, and what are the requirements for which LLM I can use (I'm guessing it has to use the OpenAI API syntax)?

netcaster1 commented 1 month ago
1. Run `ai config`:
   a. Change the OpenAI API endpoint to `http://{your ollama ip:port}/v1`.
   b. If step (a) is done properly, then when you select 'Model' you will see all the models in your Ollama instance; just select the one you prefer (if this isn't working, check that `OLLAMA_ORIGINS=*` has been set up correctly).

2. The next part is a trick: ai-shell still looks for the 'gpt-3.5-turbo' model, so let's create a fake one in Ollama to satisfy it. For example:

   ```shell
   ollama show llama3.1:latest --modelfile > /tmp/llama3.1.modelfile
   ollama create gpt-3.5-turbo --file /tmp/llama3.1.modelfile
   ollama list
   ```

3. Test it: `ai list files`
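The steps above can be sketched as one guarded script. The model name `llama3.1:latest` is an assumption carried over from the example; swap in whichever model you have actually pulled:

```shell
# Sketch of the workaround above: register a copy of a local Ollama model
# under the name ai-shell is hard-coded to request ("gpt-3.5-turbo").
# Assumes Ollama is installed and llama3.1:latest has been pulled.
if command -v ollama >/dev/null 2>&1; then
  # Export the existing model's Modelfile, then create an alias from it.
  ollama show llama3.1:latest --modelfile > /tmp/llama3.1.modelfile
  ollama create gpt-3.5-turbo --file /tmp/llama3.1.modelfile
  ollama list    # "gpt-3.5-turbo" should now appear in the list
else
  echo "ollama not found on PATH; install it first" >&2
fi
```

The alias costs no extra disk space worth worrying about: `ollama create` from an identical Modelfile reuses the already-downloaded model layers.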

Done.

Ajaymamtora commented 1 month ago

It starts writing out the command but then cancels itself and says this:

Request to OpenAI failed with status 404:

{ "error": { "message": "model \"gpt-3.5-turbo\" not found, try pulling it first", "type": "api_error", "param": null, "code": null } }