micr0-dev / lexido

A terminal assistant, powered by Generative AI
GNU Affero General Public License v3.0

Feature Request: Remote Host Configuration for Ollama Integration in Lexido #49

Closed. arm-ser closed this issue 4 months ago.

arm-ser commented 4 months ago

Is your feature request related to a problem? Please describe. If I understand correctly, the current setup for Lexido and Ollama requires running both applications on the same server. However, I have mini PCs as servers on my network, and a gaming machine with a powerful graphics card. When I try to run Ollama on the mini PC servers, performance is very slow because of the hardware, but the gaming machine runs LLMs just fine.

Describe the solution you'd like I would like to request a new feature that allows users to set a remote host where Ollama is running, instead of having to run it on the same server as Lexido. This could, for example, be implemented through a new flag similar to "--setModel", such as "--setOllama" (to set Ollama's IP and port); a rough sketch of what this would involve is included below.

Describe alternatives you've considered I tried relatively small and simple models (such as qwen:0.5b), but even those are slow and not capable of doing the required task.

Additional context I think that providing a way to set a remote host for Ollama would greatly improve Lexido's performance, and since Ollama can run as a Docker container, it would also make it easier to set up a more secure network.
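For context, a minimal sketch of what this would take, assuming Ollama's documented REST API (`/api/generate`, default port 11434): the only real change on the client side is making the host configurable instead of assuming localhost. The host address, model name, and the idea of feeding it from a "--setOllama"-style flag are illustrative, not lexido's actual code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// generateRequest mirrors the request body of Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse holds the single field we care about from the reply.
type generateResponse struct {
	Response string `json:"response"`
}

// askOllama sends a prompt to an Ollama server at the given host:port,
// which does not have to be the machine running the client.
func askOllama(host, model, prompt string) (string, error) {
	body, err := json.Marshal(generateRequest{Model: model, Prompt: prompt, Stream: false})
	if err != nil {
		return "", err
	}
	resp, err := http.Post("http://"+host+"/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	var out generateResponse
	if err := json.Unmarshal(data, &out); err != nil {
		return "", err
	}
	return out.Response, nil
}

func main() {
	// Hypothetical remote host; in the feature described above this would come
	// from a flag such as "--setOllama" rather than being hard-coded.
	answer, err := askOllama("192.168.1.50:11434", "qwen:0.5b", "List the files in the current directory")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(answer)
}
```

On the Ollama side, the gaming machine would also need to accept connections from other hosts, which Ollama supports via its OLLAMA_HOST environment variable (or by publishing the port when running it as a Docker container, as mentioned above).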

micr0-dev commented 4 months ago

Interesting, I will look into it. It is probably something I can do if Ollama allows for it. I will keep you posted.

micr0-dev commented 4 months ago

I want to update you that I am working on this, and that soon lexido will support all REST API LLMs, including remote local LLMs and even ChatGPT, Claude, and more!
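To illustrate what "all REST API LLMs" could mean in practice, here is a rough, hypothetical sketch of a provider-agnostic layer; the names below are made up for illustration and are not lexido's actual API. The common thread is that a remote Ollama instance, an OpenAI-compatible server, or a hosted provider all reduce to an HTTP endpoint plus a model name.

```go
// Illustrative sketch only; these names are not lexido's actual API.
package llm

import "context"

// Completer is the single capability a backend has to provide.
type Completer interface {
	Complete(ctx context.Context, prompt string) (string, error)
}

// OllamaBackend would POST to http://<Host>/api/generate, whether Host is
// localhost:11434 or a remote machine on the LAN.
type OllamaBackend struct {
	Host  string
	Model string
}

// OpenAIBackend would POST to <BaseURL>/v1/chat/completions, which covers
// ChatGPT-style hosted APIs and many OpenAI-compatible local servers.
type OpenAIBackend struct {
	BaseURL string
	APIKey  string
	Model   string
}
```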

arm-ser commented 4 months ago

Amazing. Cannot wait to test it out. Thank you!!

micr0-dev commented 4 months ago

Just posted version 1.4! Be sure to read the README and the section on remote LLMs. If you have any questions, feel free to ask. Good luck!

micr0-dev commented 4 months ago

Re-open this if there are issues