helix-editor / helix

A post-modern modal text editor.
https://helix-editor.com
Mozilla Public License 2.0

Add Copilot Plugin #1927

Open 7flash opened 2 years ago

nick887 commented 1 year ago

Are there any discussions about this? I think it would be a really useful feature if we could use GitHub Copilot in Helix, and I would switch from GoLand to Helix to get my work done.

sudormrfbin commented 1 year ago

There are no plans to have copilot in the editor core, so this will have to wait till there is proper plugin support.

luccahuguet commented 1 year ago

Thanks for the info, Sudormrfbin.

If something changes, please let us know; it would be a very useful feature!

In the meantime, it is possible to run Copilot (edit: OpenAI's Codex, which powers Copilot) in the terminal with bash or zsh.

Not as good, but it could help.

roehst commented 1 year ago

Waiting on this feature to move from neovim to helix for good

lukepighetti commented 1 year ago

Has anyone tried integrating this copilot LSP? https://github.com/TerminalFi/LSP-copilot/blob/master/language-server/package.json#L4

yudjinn commented 1 year ago

Isn't GitHub getting sued over how Copilot takes data? I definitely don't think Copilot should be anywhere near core.

gaetschwartz commented 1 year ago

Is there any update on this ?

luccahuguet commented 1 year ago

Is there any update on this ?

Hi, this needs a plugin system in the first place, which does not exist yet.

The plugin system is currently being prototyped, and it might take a long while before it is done. The current prototype might even be scrapped if the tech turns out not to be as suitable as other options.

A plugin system is quite a big endeavor, so don't wait on it.

That said, the maintainers are quite good and productive, so it will get done at some point.

PS: I also miss this feature a lot and have been using VS Code when I have to use it.

lukepighetti commented 1 year ago

This seems like something we can solve with LSP integration instead of the plugin system. I believe that's how folks are using it in Sublime https://forum.sublimetext.com/t/github-copilot-for-sublime-text-4-is-coming/64449/3
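
As a sketch of what that could look like on Helix's side: an external Copilot-style server might be registered in languages.toml and listed alongside a language's regular server. This assumes support for multiple language servers per language, and "copilot-ls" is a hypothetical binary name, not an existing integration:

```toml
# languages.toml (sketch; "copilot-ls" is a made-up binary name)
[language-server.copilot]
command = "copilot-ls"
args = ["--stdio"]

[[language]]
name = "rust"
# run the regular server and the completion server side by side
language-servers = ["rust-analyzer", "copilot"]
```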

kanwarujjaval commented 1 year ago

Was anyone able to use the Copilot LSP successfully? It would be really useful, especially once #2507 is merged.

leeola commented 1 year ago

With Copilot X's recent announcement i had to look this up for Helix, landing me here. The LSP idea sounds alright, but it would be nice to craft a deeper UX around Copilot.

Are plugin-like APIs available in Helix that would let one, perhaps, temporarily fork Helix and integrate a Copilot plugin directly into the forked Helix binary? I.e., write a plugin directly into a fork of Helix proper, even though we would only intend to migrate it to a real plugin ASAP?

If Copilot X is useful (big if, heh) it would be nice to have a good experience in Helix for that. They have a NeoVim plugin, for comparison.

patrick-kidger commented 1 year ago

Also ended up here after the Copilot X announcement, FWIW. No idea if implementing it as just an LSP is technically feasible, but that would certainly be nice if so.

mikkelfj commented 1 year ago

The way things are going with AI these days, some level of non-trivial support would eventually be necessary, but some aspects like documentation could also belong to the build system.

Either way, I suspect we will see competitors, notably also fully open source, to Copilot-X in the coming years (or days, maybe). Also the feature set will evolve drastically.

For these reasons I think both a sense of urgency and some restraint are necessary at the same time.

7flash commented 1 year ago

A temporary solution I have for myself is a chatgpt window running a "bridge" script in devtools. It allows chatgpt to communicate with other apps through a locally running database/job queue. If it makes sense, I can try to make it into a plugin.

ptman commented 1 year ago

Not just copilot, there are now several similar services. Copilot, tab9, codeium, codegeex, (vim-ai just uses chatgpt), ...

7flash commented 1 year ago

A temporary solution I have for myself is a chatgpt window running a "bridge" script in devtools. It allows chatgpt to communicate with other apps through a locally running database/job queue. If it makes sense, I can try to make it into a plugin.

Relevant: https://github.com/kazuki-sf/YouTube_Summary_with_ChatGPT/issues/9

ptman commented 1 year ago

And also Amazon CodeWhisperer. I suggest the title be rewritten to reflect the variety of alternatives.

Neugierdsnase commented 1 year ago

What can we do to make this happen as soon as possible? I want to switch over completely, but the reality is that I just don't want to miss out on AI tools.

I'm willing to invest time in this.

7flash commented 1 year ago

What can we do to make this happen as soon as possible? I want to switch over completely, but the reality is that I just don't want to miss out on AI tools.

I'm willing to invest time in this.

Consider running a browser extension that establishes communication between ChatGPT and Helix. Here is a minimal example I made between two instances of ChatGPT, but you can imagine replacing the second instance with the Helix editor: https://github.com/7flash/AutoChatGPT

rnarenpujari commented 1 year ago

Related discussion thread: https://github.com/helix-editor/helix/discussions/4037

kirawi commented 1 year ago

I noticed that the related fork was not linked: https://github.com/helix-editor/helix/pull/6865 AFAIK it works and some people are using it, but we have no plans to merge it into Helix for reasons discussed there

Neugierdsnase commented 12 months ago

I noticed that the related fork was not linked: #6865 AFAIK it works and some people are using it, but we have no plans to merge it into Helix for reasons discussed there

Thanks for linking!

0x61nas commented 7 months ago

Isn't GitHub getting sued over how Copilot takes data? I definitely don't think Copilot should be anywhere near core.

+ not everyone wants to be distracted by its stupid suggestions. We should wait for the plugin system to be ready, and I'm pretty sure someone will write a plugin for this.

hemedani commented 6 months ago

How about Codeium?

leona commented 5 months ago

In case anybody is still looking for this: https://github.com/helix-editor/helix/discussions/4037#discussioncomment-8227690

tirithen commented 5 months ago

As suggested here https://github.com/helix-editor/helix/discussions/9369, I believe that adding Ollama LSP support would be a great possibility. Although the heavier models require a GPU with enough memory, they all run locally on your machine, which might be considered a much safer setup.

Having GPT-style text completion would be a very welcome addition to Helix, and having it work with open, locally run models would be even better.

iocron commented 5 months ago

As suggested here #9369, I believe that adding Ollama LSP support would be a great possibility. Although the heavier models require a GPU with enough memory, they all run locally on your machine, which might be considered a much safer setup.

Having GPT-style text completion would be a very welcome addition to Helix, and having it work with open, locally run models would be even better.

I agree, except about the heavier models. We don't necessarily need them, because there are a couple of great and efficient Ollama code models out there that don't require a big GPU :) https://ollama.ai/library?q=code

mikkelfj commented 5 months ago

ollama run mistral works fine on a Mac Studio M2 Max with 32 GB RAM; it's a 7B model.

mikkelfj commented 5 months ago

However, it only really gets interesting when the models can understand content referenced in compile_commands.json

tirithen commented 5 months ago

Regardless of model size, the great thing with Ollama is that projects like it make it possible to easily pull the model of your choice, one that fits your needs and system capabilities, while ensuring privacy.

I wonder if there are already good LSPs out there for Ollama that could be hooked into Helix? Has anyone considered the llm-ls project? I just found it, but it is supposed to work as an LSP and it can bridge over to LLM runners, Ollama being one of them.

There has already been some interest in figuring out the LSP integration from @hemedani and @webdev23 on the issue https://github.com/huggingface/llm-ls/issues/49. Maybe someone here can help them out with some directions to get going?

leona commented 5 months ago

Regardless of model size, the great thing with Ollama is that projects like it make it possible to easily pull the model of your choice, one that fits your needs and system capabilities, while ensuring privacy.

I wonder if there are already good LSPs out there for Ollama that could be hooked into Helix? Has anyone considered the llm-ls project? I just found it, but it is supposed to work as an LSP and it can bridge over to LLM runners, Ollama being one of them.

There has already been some interest in figuring out the LSP integration from @hemedani and @webdev23 on the issue huggingface/llm-ls#49. Maybe someone here can help them out with some directions to get going?

It's possible to overwrite the OpenAI endpoint of my language server here, but if it doesn't follow the OpenAI pattern it won't work, and I don't think Ollama's does. If people wanted support for Ollama it would be fairly easy to add, but any time I've tried locally hosted models they've been pretty poor for this use case.
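
To illustrate why the endpoint can't simply be swapped: the two services expect differently shaped request bodies. A minimal sketch of the translation a bridge would need, where the field names follow each project's public HTTP API and everything else (function names, model name) is hypothetical:

```python
def openai_chat_body(model: str, prompt: str) -> dict:
    # Body shape used by OpenAI-style /v1/chat/completions endpoints
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def ollama_generate_body(model: str, prompt: str) -> dict:
    # Body shape used by Ollama's local /api/generate endpoint
    return {"model": model, "prompt": prompt, "stream": False}


def to_ollama(openai_body: dict) -> dict:
    # Flatten the chat messages into a single prompt string
    prompt = "\n".join(m["content"] for m in openai_body["messages"])
    return ollama_generate_body(openai_body["model"], prompt)


print(to_ollama(openai_chat_body("mistral", "write unit tests")))
```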

tirithen commented 5 months ago

@leona thanks for sharing! Your project could be a nice template for running against Ollama. When I tested Mistral 7B separately for code assistance in Rust, it worked pretty well for me, at least better than going without, but I would probably want to run the larger models for even better responses.

I'm personally more into these open models mainly for privacy reasons, but anyhow your project could also be useful as is for the ones that started the issue and were interested in having Copilot running (sorry for hijacking the thread with Ollama things, by the way).

It's really nice to have a running LSP plugin example; now I wish I just had more time to try this out with Ollama. If anyone gets started I'll definitely try to find some time to help out (though preferably in Rust in that case).

mikkelfj commented 5 months ago

I'm not quite ready to rely on LLMs for coding. This is Mistral 7B:

"An octahedron consists of eight vertices and eight triangular faces, with each face being made up of three vertices. Therefore, there are indeed a total of 8 x 3 = 24 individual vertices, but since a triangle is defined by three non-unique vertices, there are only 12 unique triangles in an octahedron."

7flash commented 4 months ago

Hi! I would like to share the current workflow I've found myself comfortable with.

https://github.com/7flash/helix-chatgpt

Note that it works especially well with Warp Terminal, where I created a workflow/button to execute the script, but you can also define a bash function, etc.

When you run the script, it opens a new file in Helix where you can write your prompt, and I found that more comfortable than any existing UI.

Just sharing what works for me; there isn't good documentation yet, so please feel free to contribute.

kyfanc commented 3 months ago

Hi everyone!

@leona thanks for your effort in creating helix-gpt. I made an attempt to integrate with Ollama. @tirithen, maybe you would be interested in giving it a try?

Regarding llm-ls, I believe they are implementing the features as LSP custom methods, which Helix's built-in LSP client does not currently support.

tirithen commented 3 months ago

@kyfanc I'm off on a trip for a while now without my GPU, but I'll give it a try once I'm back.

Ideally, I think this sort of application should in the end be a simple binary rather than Bun + TypeScript, to be fast and efficient, written in Rust or similar. But I also saw that it can at least build to a binary via Bun (I suppose that means bundling and running a full V8 to run the LSP).

Nice that there is some progress either way! :-) It's also nice that the approach supports both cloud GPT providers and Ollama, so users can choose what fits them best. :-)

RayyanNafees commented 1 month ago

(I suppose it means bundling and running a full V8 to run the lsp).

@tirithen Bun doesn't actually use V8; it uses JavaScriptCore, which has a smaller footprint.

llamaha commented 1 week ago

So I really wanted to see this feature (albeit for Claude), but I'm actually seeing good results using aichat: https://github.com/sigoden/aichat

Using this with the "insert-output" and "append-output" commands in Helix, the output will be written into Helix:

:insert-output aichat -r %code% Write a Rust function that will parse an i32 integer and return a result

You can run it outside of Helix just as easily:

aichat -f src/main.rs what does this code do?

Using the shell_pipe feature you can also run aichat against highlighted text:

:pipe aichat -r %code% please write unit tests for this highlighted function

Or this is also very easy from the command line by specifying the function:

aichat -f src/main.rs -r %code% please write unit tests for the function run

Note that I've been verbose on purpose for demonstration. The above commands can be shortened, and there are hotkeys for the shell commands.
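
A workflow like the above could also be bound to a key. A hedged sketch for Helix's config.toml, assuming your Helix build accepts typable commands with arguments in keymaps and that aichat is on your PATH (the chosen key is arbitrary):

```toml
# config.toml: in select mode, <space>a pipes the current selection
# through aichat and replaces it with the response
[keys.select.space]
a = ":pipe aichat -r %code% please write unit tests for this code"
```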

mikkelfj commented 1 week ago

The benefit comes when AIs get larger contexts and can understand compile_commands.json or compile_flags.txt.

Then you could just say: add test cases for the functions below, and add a bash script to drive it; also write a bat file for Windows like the other scripts.

https://clang.llvm.org/docs/JSONCompilationDatabase.html
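
For context, the compilation database linked above is just a JSON array of per-file compile invocations; a model with a large enough context could recover include paths and defines from entries like this (paths and flags here are made up for illustration):

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "cc -Iinclude -DNDEBUG -O2 -c ../src/main.c -o main.o",
    "file": "../src/main.c"
  }
]
```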

rodrigopim commented 5 days ago

So I really wanted to see this feature (albeit for Claude), but I'm actually seeing good results using aichat: https://github.com/sigoden/aichat

Using this with the "insert-output" and "append-output" commands in Helix, the output will be written into Helix:

:insert-output aichat -r %code% Write a Rust function that will parse an i32 integer and return a result

You can run it outside of Helix just as easily:

aichat -f src/main.rs what does this code do?

Using the shell_pipe feature you can also run aichat against highlighted text:

:pipe aichat -r %code% please write unit tests for this highlighted function

Or this is also very easy from the command line by specifying the function:

aichat -f src/main.rs -r %code% please write unit tests for the function run

Note that I've been verbose on purpose for demonstration. The above commands can be shortened, and there are hotkeys for the shell commands.

Super cool tip!