David-Kunz / gen.nvim

Neovim plugin to generate text using LLMs with customizable prompts
The Unlicense

Docker #17

Closed Flamme13 closed 3 months ago

Flamme13 commented 9 months ago

How can I use this plugin with Ollama running in a Docker container?

Thanks.

David-Kunz commented 9 months ago

Hi @Flamme13 ,

You probably need to change

require('gen').command = 'docker exec -it ollama ollama run $model $prompt'

but I haven't tested it.
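
For context, a fuller sketch of that override in a typical config (untested; the container name ollama and the model mistral are assumptions, adjust them to match docker ps and the models pulled inside the container):

local gen = require('gen')
-- Assumption: a running container named "ollama" with the model already pulled.
gen.model = 'mistral'
-- Route the plugin's invocation through the container instead of a local binary.
gen.command = 'docker exec -it ollama ollama run $model $prompt'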

CaptainKranch commented 8 months ago

I'm having an issue while using Ollama in a Docker container. I have changed the model and command, but it just gives me an empty buffer.


CaptainKranch commented 8 months ago

I was missing $model $prompt in my last attempt. I made the proper changes, but nothing happened. Same behavior.

wishuuu commented 8 months ago

Same problem here.

I attached to the Docker container to see if the ollama process inside it is receiving any requests, but it isn't. My guess is that it stops on line 62 of /lua/gen/init.lua: 'ollama serve > /dev/null 2>&1 &' is not a valid Windows shell command. But I'm not a Lua expert, so it's just a guess.

xelarro commented 8 months ago

I got it working by changing the command line to:

require('gen').command = 'docker exec ollama ollama run $model $prompt'

and by commenting out the "ollama serve" line, as it is not needed when running Ollama in Docker, since the container starts the server automatically:

-- pcall(io.popen, 'ollama serve > /dev/null 2>&1 &')

wishuuu commented 8 months ago

I'll try adding support for Docker containers to the project by allowing the user to specify require('gen').container. If left unset, the plugin will execute normally; otherwise, it will skip the line pcall(io.popen, 'ollama serve > /dev/null 2>&1 &').

EDIT:

It will also automatically use the command docker exec -it $container ollama run $model $prompt as the default whenever container is set by the user (rough sketch below).
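
A hypothetical sketch of that branching inside the plugin module (the container option does not exist in the released plugin; this only illustrates the proposed behaviour):

-- Proposed, not yet implemented: M.container names a running Docker
-- container; when set, the plugin skips starting a local server.
if M.container == nil then
  -- default behaviour: start a local Ollama server
  pcall(io.popen, 'ollama serve > /dev/null 2>&1 &')
  M.command = 'ollama run $model $prompt'
else
  -- the container already runs the server, so only exec into it
  M.command = 'docker exec -it ' .. M.container .. ' ollama run $model $prompt'
end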

David-Kunz commented 8 months ago

Hi @wishuuu ,

With the recent changes in Ollama, the ollama run command didn't work properly. A temporary solution involved termopen, but that also came with many downsides.

Currently, we switched to an HTTP-based call mechanism. Would you mind trying whether you can restore the former behaviour by setting

M.init = <Lua function to start up the Docker container>
M.command = 'docker exec -it some_container ollama run $model $prompt'
M.json_response = false

Note: there's no M.container option anymore; to switch the container, you would need to adjust the command.

If this works, we could also support containers more natively.
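
For the init placeholder, a minimal sketch could look like this (untested assumption: the container some_container already exists and docker start is enough to bring it up; it is a no-op if the container is already running):

local gen = require('gen')
-- Assumption: the container exists; this has not been verified with the plugin.
gen.init = function()
  pcall(io.popen, 'docker start some_container > /dev/null 2>&1 &')
end
gen.command = 'docker exec -it some_container ollama run $model $prompt'
gen.json_response = false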

wishuuu commented 8 months ago

Hi @David-Kunz

If the project is switching exclusively to using Ollama via HTTP, then I think we may just delete all Docker support. Users can map the port inside the Docker container to a physical port of the machine that runs it and then use the standard plugin mechanisms. I'll clean up the previous Docker support, test whether that solution works properly, and prepare a guide in the README on how to set up a Docker container to make it work like this (rough sketch below).
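
The port-mapping part might look like this (a sketch; 11434 is Ollama's default API port, and ollama/ollama is the official image name):

docker run -d --name ollama -p 11434:11434 ollama/ollama

With the port published, the plugin's HTTP calls to localhost should reach the server inside the container without any Docker-specific plugin configuration.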

David-Kunz commented 8 months ago

Thanks a lot, @wishuuu, and sorry for changing the whole approach. Yes, if it works to use HTTP with the Docker container, then it's even better!