Norfeldt opened 3 months ago

Is it possible to use a local LLM via Ollama? If so, what's the setup, and what are the requirements for which LLMs I can use? (I'm guessing they have to support the OpenAI API syntax.)
`ai config`

a. Change the OpenAI API endpoint to `http://{your ollama ip:port}/v1`.
b. If step a is done properly, then when you select 'Model', you can see all the models in your Ollama instance; just select your preferred one. (If it's not working, check that `OLLAMA_ORIGINS=*` has been set up correctly.)
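Before pointing ai-shell at it, you can verify the endpoint is reachable by querying Ollama's OpenAI-compatible model list directly. A quick sanity check; the host and port below are placeholders for your own setup:

```sh
# List local models through the OpenAI-compatible API
# (replace localhost:11434 with your Ollama host and port)
curl http://localhost:11434/v1/models
# Expected: a JSON object with your local models under "data"
```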
The next part is a trick: it looks like ai-shell will still look for the 'gpt-3.5-turbo' model, so let's create a fake one in Ollama to fool it. For example:

```sh
ollama show llama3.1:latest --modelfile > /tmp/llama3.1.modelfile
ollama create gpt-3.5-turbo --file /tmp/llama3.1.modelfile
ollama list
```
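If you don't need to copy the full modelfile, a shorter alias should also work. A minimal sketch, assuming `llama3.1:latest` is already pulled (the `/tmp/Modelfile` path is just an example):

```sh
# Minimal Modelfile: alias gpt-3.5-turbo to the local llama3.1 weights
printf 'FROM llama3.1:latest\n' > /tmp/Modelfile
ollama create gpt-3.5-turbo -f /tmp/Modelfile
```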
Test:

```sh
ai list files
```
Done.
It starts writing out the command but then cancels itself and says this:
```
Request to OpenAI failed with status 404:

{
  "error": {
    "message": "model \"gpt-3.5-turbo\" not found, try pulling it first",
    "type": "api_error",
    "param": null,
    "code": null
  }
}
```
@Ajaymamtora This was fixed in #115, but a new version has not been released yet. @steve8708, could you help us with that?
Published!