Johnmcenroyy opened 1 month ago
I tried something like this in Xcode (Apple's IDE) and didn't like the result at all, so I don't want to spend much time trying to set it up. In any case, let me know what you learn, and if there's some problem with the plugin, I'll try to fix it.
Hi @techee So, it took some time :) I figured out how to configure Geany for lsp-ai and tested some AI models. What can I say: in-editor chat with AI wasn't very convenient for me (I tested it in Helix); I think it's better to have a separate AI chat window in Geany's terminal. Completion of course depends on the LLM model and needs more careful configuration, but technically it works :) Maybe it will be interesting for you or somebody else.
Main configuration
# for localhost
ollama serve
or
# for localhost and remote
OLLAMA_HOST=0.0.0.0 ollama serve
ollama pull qwen2.5:0.5b
# very light and fast model for testing
# full list of models is here: https://ollama.com/library
# for me gemma2:2b and gemma2:9b were rather good
Load ollama chat for testing
ollama run qwen2.5:0.5b
# for testing, type into the chat: hello world in lua
# the AI will print a hello world function in Lua
# for faster operation, ollama should be set up with Vulkan or ROCm/CUDA acceleration
# ollama run is very simple chat, for more advanced chat I suggest
# https://github.com/dustinblackman/oatmeal or
# https://github.com/ggozad/oterm
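Optionally, you can also check that the HTTP API which lsp-ai will talk to is reachable (these are the standard ollama REST endpoints; adjust the host/port if your server runs elsewhere):
# list the locally available models
curl http://127.0.0.1:11434/api/tags
# one-off generation request against the test model
curl http://127.0.0.1:11434/api/generate -d '{"model": "qwen2.5:0.5b", "prompt": "hello world in lua", "stream": false}'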
LSP-AI in Geany
[Python]
cmd=lsp-ai
initialization_options={"memory": {"file_store": {}}, "models": {"model1": {"type": "ollama", "model": "qwen2.5:0.5b", "chat_endpoint": "http://127.0.0.1:11434/api/chat", "generate_endpoint": "http://127.0.0.1:11434/api/generate", "max_requests_per_second": 1}}, "completion": {"model": "model1", "parameters": {"max_context": 2000, "options": {"num_predict": 32}}}, "chat": [{"trigger": "!C", "action_display_name": "Chat", "model": "model1", "parameters": {"max_context": 4096, "max_tokens": 1024, "system": "You are a code assistant chatbot. The user will ask you for assistance coding and you will do you best to answer succinctly and accurately"}}]}
# the main challenge here was to not get tangled in brackets :)
# change 127.0.0.1 to ip address of ollama server if it is in local network
# also see https://github.com/SilasMarvin/lsp-ai/wiki/Configuration
# and https://github.com/SilasMarvin/lsp-ai/wiki/In‐Editor-Chatting
I also wanted to ask you some questions.
In Helix, for in-editor chat you must send a textDocument/codeAction request to the LSP server by pressing space+a near the trigger code !C. Here is a video of how it works in Helix: https://github.com/SilasMarvin/lsp-ai?tab=readme-ov-file#in-editor-chatting. As far as I understand, this is supported by the geany-lsp plugin, but no commands appear for this server (lsp-ai).
https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_codeAction
Also, what do you think about supporting textDocument/inlineCompletion for future AI interaction?
https://microsoft.github.io/language-server-protocol/specifications/lsp/3.18/specification/#textDocument_inlineCompletion
https://www.tabnine.com/blog/introducing-inline-code-completions/
Thanks again for such a great project. Regards.
P.S. An example of AI in Geany with oatmeal chat in the terminal, using the ollama backend and the qwen2.5:0.5b LLM.
@Johnmcenroyy Thanks for testing this (I haven't tried it myself though).
initialization_options={"memory": {"file_store": {}}, "models": {"model1": {"type": "ollama", "model": "qwen2.5:0.5b", "chat_endpoint": "http://127.0.0.1:11434/api/chat", "generate_endpoint": "http://127.0.0.1:11434/api/generate", "max_requests_per_second": 1}}, "completion": {"model": "model1", "parameters": {"max_context": 2000, "options": {"num_predict": 32}}}, "chat": [{"trigger": "!C", "action_display_name": "Chat", "model": "model1", "parameters": {"max_context": 4096, "max_tokens": 1024, "system": "You are a code assistant chatbot. The user will ask you for assistance coding and you will do you best to answer succinctly and accurately"}}]}
# the main challenge here was to not get tangled in brackets :)
initialization_options is only meant for small JSON configs. For anything bigger, initialization_options_file=/path/to/file.json is more convenient as the JSON doesn't have to be on a single line and can be formatted as you wish. Alternatively, you can also put the config inside the config file of lsp-proxy, which you can put between geany-lsp and the AI server.
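For illustration, the configuration posted above could be saved to a file such as /path/to/lsp-ai-config.json (a made-up path) and pretty-printed; the content is identical to the single-line version, only reformatted:
{
  "memory": {"file_store": {}},
  "models": {
    "model1": {
      "type": "ollama",
      "model": "qwen2.5:0.5b",
      "chat_endpoint": "http://127.0.0.1:11434/api/chat",
      "generate_endpoint": "http://127.0.0.1:11434/api/generate",
      "max_requests_per_second": 1
    }
  },
  "completion": {
    "model": "model1",
    "parameters": {"max_context": 2000, "options": {"num_predict": 32}}
  },
  "chat": [
    {
      "trigger": "!C",
      "action_display_name": "Chat",
      "model": "model1",
      "parameters": {
        "max_context": 4096,
        "max_tokens": 1024,
        "system": "You are a code assistant chatbot. The user will ask you for assistance coding and you will do you best to answer succinctly and accurately"
      }
    }
  ]
}
The Geany-side config then only needs to point at that file via initialization_options_file.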
I also wanted to ask you some questions.
In Helix, for in-editor chat you must send a textDocument/codeAction request to the LSP server by pressing space+a near the trigger code !C. Here is a video of how it works in Helix: https://github.com/SilasMarvin/lsp-ai?tab=readme-ov-file#in-editor-chatting. As far as I understand, this is supported by the geany-lsp plugin, but no commands appear for this server (lsp-ai). https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_codeAction
The codeAction request is sent when you right-click the corresponding place in the editor. Then, all the available commands should be located under the Commands submenu. In addition, you can assign keybindings to code actions based on the name that appears under the Commands submenu. For instance, if there's a menu item called Chat, you can add
command_1_regex=Chat
and in Geany's Edit->Preferences->Keybindings you assign the keybinding you wish for this action. Then, this keybinding will always start the chat session for you.
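For example, combined with the lsp-ai setup from earlier in this thread, the section could look like this (assuming command_1_regex belongs in the same per-language section as the other settings, and using the hypothetical JSON file path from above):
[Python]
cmd=lsp-ai
initialization_options_file=/path/to/lsp-ai-config.json
# bind "command 1" to whichever code action's menu label matches "Chat"
command_1_regex=Chat
The actual key combination for "command 1" is then chosen in Edit->Preferences->Keybindings.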
I've just added another keybinding to directly show the code lens menu which might be more convenient than right-clicking and navigating to the Commands submenu.
Also, what do you think about supporting textDocument/inlineCompletion for future AI interaction? https://microsoft.github.io/language-server-protocol/specifications/lsp/3.18/specification/#textDocument_inlineCompletion https://www.tabnine.com/blog/introducing-inline-code-completions/
Alright, I haven't studied the 3.18 draft specification yet. This wouldn't be such a big problem on the LSP side; the bigger problem is how to display it in the Scintilla editor, as I don't think it supports anything like the grayed-out text over which you can type and which becomes "official" and colorized only after pressing some keybinding. As far as I know, when you insert something into Scintilla, you get it fully colorized immediately and it behaves like the rest of the code. For the same reason I don't support the textDocument/inlayHint request in the plugin either.
the bigger problem is how to display it in the Scintilla editor, as I don't think it supports anything like the grayed-out text over which you can type and which becomes "official" and colorized only after pressing some keybinding.
Possibly something like this (with some limitations) could be implemented using https://scintilla.org/ScintillaDoc.html#Annotations
By the way there's also https://github.com/TabbyML/tabby which seems to be more popular than LSP-AI.
@techee
initialization_options is only meant for small JSON configs. For anything bigger, initialization_options_file=/path/to/file.json is more convenient ...
Ah, yes, my fault, I should read the docs more carefully. I had always opened the default conf file and the docs but didn't notice that.
The codeAction request is sent when you right-click the corresponding place in the editor. Then, all the available commands should be located under the Commands submenu.
The problem is that there are no commands in the Commands submenu. Here are the logs from Helix (where it works) and Geany (no commands):
I'll be grateful if you can look into it.
I've just added another keybinding to directly show the code lens menu which might be more convenient than right-clicking and navigating to the Commands submenu.
Super !)
This wouldn't be such a big problem on the LSP side; the bigger problem is how to display it in the Scintilla editor, as I don't think it supports anything like the grayed-out text over which you can type and which becomes "official" and colorized only after pressing some keybinding.
So let's wait for Scintilla support )
Possibly something like this (with some limitations) could be implemented using https://scintilla.org/ScintillaDoc.html#Annotations
Interesting, but it seems that for now there is no LSP server that supports textDocument/inlineCompletion, so most AI LSP tools use textDocument/completion plus some dedicated extension for VSCode, Vim, etc. There is an open issue to support it in LSP-AI: https://github.com/SilasMarvin/lsp-ai/issues/5
By the way there's also https://github.com/TabbyML/tabby which seems to be more popular than LSP-AI.
Oh, a really interesting project. It seems that at first they didn't have an LSP server, but now there is one:
https://tabby.tabbyml.com/blog/2024/02/05/create-tabby-extension-with-language-server-protocol/#why-language-server
I can't really tell whether it supports textDocument/inlineCompletion now or not.
I will look into this.
Thanks for the info and the additions to the plugin.
The problem is that there are no commands in the Commands submenu. Here are the logs from Helix (where it works) and Geany (no commands):
Thanks for the logs - those were really helpful. I believe I've fixed the problem - I've just added codeAction/resolve support, which the AI server uses. It seems to work based on my limited testing with the Rust language server, which uses it too. Let me know if it works now.
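For anyone following along, the exchange is roughly this (a hand-written sketch of the LSP 3.17 flow, not a log from lsp-ai; the URI, positions and action title are placeholders):
# client -> server: code action request from the right-clicked position near the "!C" trigger
{"method": "textDocument/codeAction", "params": {"textDocument": {"uri": "file:///home/user/test.py"}, "range": {"start": {"line": 4, "character": 0}, "end": {"line": 4, "character": 2}}, "context": {"diagnostics": []}}}
# server -> client: an unresolved code action, shown in the Commands submenu
[{"title": "Chat", "data": {}}]
# client -> server: codeAction/resolve with the chosen action; the server returns the same
# action with its "edit" (or "command") filled in, which the plugin then applies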
So let's wait for Scintilla support )
Not sure if this will ever happen, but https://scintilla.org/ScintillaDoc.html#Annotations might work at least somewhat.
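Just to sketch the idea (this is not something the plugin does today; it's a rough illustration of showing a suggestion as an annotation below the current line in a Geany plugin, with the function name and the style number 200 made up for the example):
#include <geanyplugin.h>

/* Rough sketch: display an LLM suggestion as a Scintilla annotation under the
 * current line. This is not real inline "ghost text" - you can't type over it -
 * but it avoids inserting uncolorized text into the document itself. */
static void show_suggestion_annotation(ScintillaObject *sci, const gchar *suggestion)
{
	gint pos  = (gint) scintilla_send_message(sci, SCI_GETCURRENTPOS, 0, 0);
	gint line = (gint) scintilla_send_message(sci, SCI_LINEFROMPOSITION, pos, 0);

	/* grey italic style so the text reads as a suggestion, not as code */
	scintilla_send_message(sci, SCI_STYLESETFORE, 200, 0x808080);
	scintilla_send_message(sci, SCI_STYLESETITALIC, 200, TRUE);

	scintilla_send_message(sci, SCI_ANNOTATIONSETTEXT, line, (sptr_t) suggestion);
	scintilla_send_message(sci, SCI_ANNOTATIONSETSTYLE, line, 200);
	scintilla_send_message(sci, SCI_ANNOTATIONSETVISIBLE, ANNOTATION_STANDARD, 0);
}

/* Accepting the suggestion would then mean inserting the text at the caret and
 * clearing the preview with SCI_ANNOTATIONCLEARALL. */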
In any case, I'll probably just wait until textDocument/inlineCompletion gets supported.
Just curious - do these AI tools provide some useful stuff for normal coding apart from those typical demo things like min/max/factorial/fibonacci numbers/quick sort/etc?
@techee
I've just added codeAction/resolve support, which the AI server uses.
Thank you very much, yes, it works now.
Just curious - do these AI tools provide some useful stuff for normal coding apart from those typical demo things?
Really don't know what to say, I was curious about this too ) Even if completion really works (ideally) by adding whole methods etc., I am not sure that this is a good idea, because you can't keep the full picture of the code in your head, but maybe it depends on the psychology of a given person. I was really interested in testing local AI so as not to depend on external services and so on. The quality really depends on the LLM; for me the best was Gemma2 with the 9b base, but it needs to run on Vulkan or CUDA/ROCm to be really usable - only Gemma2 9b wrote a fully functional calculator ) There is also Gemma2 with the 27b base, but I can't run it on my hardware. For now, on my setup, I think chat can be useful in some situations, but I'm not sure about completions; maybe with inline completions and a good LLM it would be worth it - it needs more testing with various LLMs and configurations. Overall this demo https://www.tabnine.com/blog/introducing-inline-code-completions/ looks nice.
There is an interesting new study about AI and how much it helps with coding: https://resources.uplevelteam.com/gen-ai-for-coding https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
So for now lsp-ai works - chat and completions - but it needs inline completions (which lsp-ai doesn't support yet). As for TabbyML, it seems much more functional than lsp-ai, but for now I cannot run it through Geany; I will test it more.
For now, on my setup, I think chat can be useful in some situations, but I'm not sure about completions; maybe with inline completions and a good LLM it would be worth it - it needs more testing with various LLMs and configurations.
Thanks for your insight. Based on my experience with Xcode, which added something like that using some cloud implementation, it seemed to kind of work sometimes, but I just found it extremely distracting - it forces you to constantly switch between writing the code you want to write and reviewing whether the LLM code suggestions make sense, and at least for me, this isn't the workflow I like.
So at least for now I'm not planning to spend much time in this area.
Hi @techee Found an interesting project, LSP-AI (an open-source language server bringing Copilot power to all editors, designed to assist and empower software engineers, not replace them): https://github.com/SilasMarvin/lsp-ai. It doesn't seem to work by default, but I think it just needs more configuration on its side. If you have time/interest in it, please take a look. Thanks.
P.S. I will try to run it and post logs here and all info that I can find.