darrenburns / elia

A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.
Apache License 2.0

api_base not working #57

Open · smjure opened this issue 5 months ago

smjure commented 5 months ago

Hey there! Thanks for the awesome app!!

I have a problem with defining api_base: I have a llama3 model working well in Ollama on a different port, e.g. 12345 rather than the default 11434. To point elia at it, I modified/defined the ~/.config/elia/config.toml file as follows:

default_model = "gpt-4o"
system_prompt = "You are a helpful assistant who talks like a pirate."
message_code_theme = "dracula"

[[models]]
name = "ollama/llama3"
api_base = "http://localhost:12345" # I need this working coz will change it to remote server

i.e. I simply added api_base = "http://localhost:12345", and with it elia does not work when the model is selected (ctrl+o). If I remove this line, it works fine with my defined llama3 model. Could this be fixed to work with a custom api_base? FYI, I installed/cloned the latest elia version.
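(For anyone debugging the same setup: before suspecting the elia config, it's worth confirming that Ollama actually answers on the custom port. A minimal sketch, independent of elia, using Ollama's model-listing endpoint /api/tags and the example port 12345 from the config above:)

```python
# Sanity check: is Ollama reachable on the custom port?
# /api/tags is Ollama's model-listing endpoint; adjust the port as needed.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:12345/api/tags") as resp:
    data = json.load(resp)

# Print the names of the models the server reports as installed.
print([model["name"] for model in data.get("models", [])])
```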

darrenburns commented 5 months ago

When you say it doesn't work, what happens?

smjure commented 5 months ago

Sorry, here it goes: [screenshot of the error attached]

tanc commented 1 month ago

To get this working with llama3.1 in Ollama and a custom api_base, you can use the openai method, since the Ollama server exposes an OpenAI-compatible API:

default_model = "openai/llama3.1"

[[models]]
name = "openai/llama3.1"
api_base = "http://192.168.1.145:11434/v1"
api_key = "test_or_anything_should_be_fine"

Change the api_base to wherever your Ollama server is, and make sure it ends in /v1. The api_key can't be empty, but it can be anything.
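(To illustrate why this workaround functions: Ollama's /v1 endpoint speaks the OpenAI chat-completions protocol, so any OpenAI client can talk to it with a placeholder key. A minimal sketch, assuming the openai Python package is installed and using the example address above:)

```python
# Talk to Ollama through its OpenAI-compatible /v1 endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.145:11434/v1",   # must end in /v1
    api_key="test_or_anything_should_be_fine",  # required by the client, ignored by Ollama
)

# Note: the model name here is the plain Ollama name, without the
# "openai/" prefix, which is only a provider prefix in the elia config.
resp = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```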

The documented ollama method didn't work for me with a custom api_base.