s-kostyaev / ellama

Ellama is a tool for interacting with large language models from Emacs.
GNU General Public License v3.0

how to configure a remote host only setup? #45

Closed oatmealm closed 6 months ago

oatmealm commented 6 months ago

I currently have a default provider set up with a host, but it seems that when I want to select a model, it tries to load it locally...

(use-package! ellama
  :after llm
  :init
  (setopt ellama-language "English")
  (require 'llm-ollama)
  (setq ellama-provider
        (make-llm-ollama
         :host "100.102.104.16"
         :port 11434
         :chat-model "llama2"
         :embedding-model "llama2")))
s-kostyaev commented 6 months ago

Try adding something like this:

(setopt ellama-ollama-binary "/ssh:user@remotehost:/path/to/ollama")
oatmealm commented 6 months ago

Thanks!

I've seen that llm now considers the host if it is mentioned in the struct, so the initial provider works out of the box in ellama. But when I select another model using ellama-get-ollama-local-model, it ignores the host when calling make-llm-ollama:

It would be useful if it could re-use the host and port of the initial struct created with setopt ellama-provider...

;; Defined in ~/.config/emacs/.local/straight/repos/ellama/ellama.el
(defun ellama-get-ollama-local-model ()
  "Return llm provider for interactively selected ollama model."
  (interactive)
  (let ((model-name
         (completing-read "Select ollama model: "
                          (mapcar (lambda (s)
                                    (car (split-string s)))
                                  (seq-drop
                                   (process-lines ellama-ollama-binary "ls")
                                   1)))))
    (make-llm-ollama
     :chat-model model-name :embedding-model model-name)))
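Something like the following rough sketch is what I mean: a variant that re-uses the host and port of the current provider. It assumes ellama-provider holds an llm-ollama struct and that llm-ollama-host / llm-ollama-port are the accessors generated by cl-defstruct for that struct.

(defun my/ellama-get-ollama-remote-model ()
  "Return llm provider for interactively selected ollama model,
re-using host and port from the current `ellama-provider'."
  (interactive)
  (let ((model-name
         (completing-read "Select ollama model: "
                          (mapcar (lambda (s)
                                    (car (split-string s)))
                                  (seq-drop
                                   (process-lines ellama-ollama-binary "ls")
                                   1)))))
    (make-llm-ollama
     ;; Assumed cl-defstruct accessors for the llm-ollama struct.
     :host (llm-ollama-host ellama-provider)
     :port (llm-ollama-port ellama-provider)
     :chat-model model-name
     :embedding-model model-name)))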
s-kostyaev commented 6 months ago

I see. I will think about how to fix it properly. For now you can setopt multiple providers in ellama-providers and switch between them later.
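For example (a rough sketch, assuming ellama-providers is an alist of display name to provider; the host, port, and model names are placeholders to adjust):

(setopt ellama-providers
        `(("local llama2"  . ,(make-llm-ollama
                               :chat-model "llama2"
                               :embedding-model "llama2"))
          ("remote llama2" . ,(make-llm-ollama
                               :host "100.102.104.16"
                               :port 11434
                               :chat-model "llama2"
                               :embedding-model "llama2"))))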

oatmealm commented 6 months ago

Tried both local and remote and it works perfectly! Thank you!

oatmealm commented 6 months ago

Ok, sorry, but something is not working right... I think that when you run (process-lines ellama-ollama-binary "ls") without the OLLAMA_HOST environment variable set to the remote host, it will only list local models. I had it set manually, so I think that's why it worked initially... I don't think ollama supports listing a remote server any other way? It works for me when I call `(setenv "OLLAMA_HOST" "x.x.x.x")` manually.
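To illustrate (a sketch; the address is a placeholder for the remote host):

;; With OLLAMA_HOST set, the local `ollama' binary proxies to the remote server,
;; so `ollama ls' (and therefore `ellama-get-ollama-local-model') lists remote models.
(setenv "OLLAMA_HOST" "100.102.104.16:11434")
(process-lines ellama-ollama-binary "ls")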

s-kostyaev commented 6 months ago

@oatmealm do you do this instead of changing ellama-ollama-binary, or together with it?

oatmealm commented 6 months ago

> @oatmealm do you do this instead of changing ellama-ollama-binary, or together with it?

I don't change ellama-ollama-binary; I simply setenv OLLAMA_HOST to the remote host. Same on the CLI. I don't know if that's the only way to do it, but I don't see any related options to pass to the command otherwise, unfortunately. So when calling a remote ollama this way you're not really calling a network API: the local binary proxies your interaction with the remote instance.

s-kostyaev commented 6 months ago

I didn't know that. I use ollama locally only.

stephensrmmartin commented 4 months ago

Ok, so to clarify: I just need to 1) run a local ollama server and 2) setenv OLLAMA_HOST to point to my remote ollama server, and it should work? And it works just because the ollama client acts as a proxy to the OLLAMA_HOST server?

s-kostyaev commented 4 months ago

@oatmealm need your advice 🙂

stephensrmmartin commented 4 months ago

Ok, it seems to be working.

I added the (setq ellama-provider (make-llm-ollama :host "IP HERE" :port :chat-model "" :embedding-model "")) part, but I don't know if that's even necessary. I used (setenv "OLLAMA_HOST" ":") in the :init section, and made sure to have a local ollama server running; the local one need not have models, as I think it really is just a proxy at that point.
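For anyone landing here later, a minimal sketch of the full remote-only setup discussed above (the address, port, and model name are placeholders for your own server):

(use-package! ellama
  :after llm
  :init
  (setopt ellama-language "English")
  (require 'llm-ollama)
  ;; Make the local `ollama' CLI proxy model listing to the remote server.
  (setenv "OLLAMA_HOST" "100.102.104.16:11434")
  ;; Point the default provider at the same remote server.
  (setq ellama-provider
        (make-llm-ollama
         :host "100.102.104.16"
         :port 11434
         :chat-model "llama2"
         :embedding-model "llama2")))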