Closed: oatmealm closed this issue 6 months ago.
Try adding something like this:

```elisp
(setopt ellama-ollama-binary "/ssh:user@remotehost:/path/to/ollama")
```
Thanks!
I've seen that `llm` now considers the host if it is mentioned in the struct, so the initial provider works out-of-the-box in ellama. But when I select another model using `ellama-get-ollama-local-model`, it ignores the host when calling `make-llm-ollama`. It would be useful if it could re-use the `host` and `port` of the initial struct created with `setopt ellama-provider`:
```elisp
;; Defined in ~/.config/emacs/.local/straight/repos/ellama/ellama.el
(defun ellama-get-ollama-local-model ()
  "Return llm provider for interactively selected ollama model."
  (interactive)
  (let ((model-name
         (completing-read "Select ollama model: "
                          (mapcar (lambda (s)
                                    (car (split-string s)))
                                  (seq-drop
                                   (process-lines ellama-ollama-binary "ls")
                                   1)))))
    (make-llm-ollama
     :chat-model model-name :embedding-model model-name)))
```
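Something like this sketch is what I have in mind (hypothetical, not a patch; it assumes the `llm-ollama-p`/`llm-ollama-host`/`llm-ollama-port` accessors that `cl-defstruct` generates for the struct in llm-ollama.el):

```elisp
;; Hypothetical sketch: re-use :host and :port from the current
;; `ellama-provider' when it is an ollama provider.
(defun my/ellama-get-ollama-model ()
  "Select an ollama model, keeping the current provider's host and port."
  (interactive)
  (let ((model-name
         (completing-read "Select ollama model: "
                          (mapcar (lambda (s) (car (split-string s)))
                                  (seq-drop
                                   (process-lines ellama-ollama-binary "ls")
                                   1)))))
    (apply #'make-llm-ollama
           :chat-model model-name
           :embedding-model model-name
           ;; Copy host/port only when the current provider is ollama;
           ;; otherwise `apply' gets nil and the defaults are used.
           (when (llm-ollama-p ellama-provider)
             (list :host (llm-ollama-host ellama-provider)
                   :port (llm-ollama-port ellama-provider))))))
```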
I see. I will think about how to fix this properly. For now you can `setopt` multiple providers in `ellama-providers` and switch between them later.
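For example, a rough sketch (host, port, and model names are placeholders):

```elisp
(require 'llm-ollama)
;; Backquote/comma builds real provider structs in the alist;
;; the host, port, and model names below are placeholders.
(setopt ellama-providers
        `(("local"  . ,(make-llm-ollama
                        :chat-model "zephyr"
                        :embedding-model "zephyr"))
          ("remote" . ,(make-llm-ollama
                        :host "192.168.1.10"
                        :port 11434
                        :chat-model "zephyr"
                        :embedding-model "zephyr"))))
```

You can then switch between them with `ellama-provider-select`.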
Tried both local and remote and it works perfectly! Thank you!
Ok, sorry, but something is not working right... I think that when you run `(process-lines ellama-ollama-binary "ls")` without the `OLLAMA_HOST` environment variable set to the remote host, it will only list local models. I had it set manually, so I think that's why it worked initially... I don't think the ollama CLI supports listing a remote server any other way? It works for me when I call `(setenv "OLLAMA_HOST" "x.x.x.x")` manually.
@oatmealm do you do this instead of changing `ellama-ollama-binary`, or together with it?
I don't change `ellama-ollama-binary`; I simply `setenv` `OLLAMA_HOST` to the remote host. Same on the CLI. I don't know if that's the only way to do it, but I don't see any related options to pass to the command otherwise, unfortunately. So when calling a remote ollama you're not really calling a network API: the local client proxies your interaction with the remote instance.
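Roughly, this is all I do (the IP is a placeholder):

```elisp
;; The local ollama client reads OLLAMA_HOST from the environment and
;; proxies every command, including `ls', to that instance.
(setenv "OLLAMA_HOST" "192.168.1.10:11434")
;; So this now lists the remote server's models:
(process-lines ellama-ollama-binary "ls")
```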
I didn't know that. I use ollama locally only.
Ok, so to clarify: I just need 1) a local ollama install, and 2) to configure `OLLAMA_HOST` via `setenv` to point to my remote ollama server, and it should work? And that works just because the local ollama client acts as a proxy to the `OLLAMA_HOST` server?
@oatmealm need your advice 🙂
Ok, it seems to be working. I added the `(setq ellama-provider (make-llm-ollama :host "IP HERE" :port ...))` part.
I currently have a default provider set up with a host, but it seems that when I want to select a model, it tries to load it locally...
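Based on the thread above, this is the rough combination I'm trying (IP, port, and model names are placeholders); the `setenv` part is what should make model selection query the remote server too:

```elisp
(require 'llm-ollama)
;; A provider with an explicit host/port covers chat requests...
(setopt ellama-provider
        (make-llm-ollama
         :host "192.168.1.10"
         :port 11434
         :chat-model "llama3"
         :embedding-model "llama3"))
;; ...while OLLAMA_HOST makes `ollama ls' (used by
;; `ellama-get-ollama-local-model') hit the same remote instance.
(setenv "OLLAMA_HOST" "192.168.1.10:11434")
```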