Open pidgeon777 opened 1 month ago
Claude and Gemini are now also supported in Copilot. Maybe add a parameter for picking among the multi-model options?
https://github.blog/news-insights/product-news/bringing-developer-choice-to-copilot/
Huge news here, for us Copilot subscribers. I truly hope support will be added in the plugin for all those models, with Copilot as provider.
The CopilotChat.nvim plugin has recently expanded its capabilities by adding support for several new AI models: o1-preview, o1-mini, and claude-3.5-sonnet.
Having recently experimented with the claude-3.5-sonnet model for the first time, I can now understand why it has garnered such widespread appreciation. Its adherence to system prompts is significantly more accurate than that of the gpt-4o model I had been using previously.
Microsoft's strategic decision to integrate these diverse AI models under the unified GitHub Copilot platform appears to be a well-calculated move. This integration offers subscribers seamless access to multiple state-of-the-art AI models through a single subscription service.
Looking forward, there's hope that avante.nvim will be updated to accommodate these new developments in the ecosystem.
Maybe change the issue title to note that there are multiple models now available? It might draw more attention.
Done.
Are these optional models controlled by copilot.lua?
@msdone-lwt I don't see a way to change the model from copilot.lua, for some reason.
If you're referring to the model used by Copilot for code completion, it was originally based on OpenAI Codex:
OpenAI Codex Official Documentation
The Code Completion functionality received a major update in July 2023:
GitHub Blog Post about the Update
Since then, it has been powered by a new model developed through a collaboration between OpenAI, Microsoft Azure AI, and GitHub. This new model offers a 13% latency improvement compared to its predecessor.
While looking at the editor integrations, it's worth noting that the copilot.vim README still mentions that "GitHub Copilot uses OpenAI Codex to suggest code and entire functions in real-time right from your editor." However, it appears that users cannot manually change or select the underlying completion model used by editor plugins like copilot.vim (and consequently copilot.lua). The model selection and updates are managed entirely on GitHub's backend infrastructure.
For comprehensive information about GitHub Copilot, including features, pricing, and documentation, visit the official GitHub Copilot page:
Anyway, the specific name of this new model, as well as the model currently used for the Code Completion functionality, seems to remain undisclosed.
I've always been curious about this information. If anyone has more detailed insights about the current model, I would greatly appreciate if you could share them. Thank you!
I have successfully enabled Claude 3.5 Sonnet support with the following configuration:

```lua
local my_opts = {
  provider = "copilot",
  copilot = {
    model = "claude-3.5-sonnet",
    -- max_tokens = 4096,
  },
}
```
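For reference, here is a minimal sketch of how such a table might be wired into the plugin, assuming a lazy.nvim-style spec (the spec shape is illustrative; only the `provider`/`copilot` options come from the snippet above):

```lua
-- Illustrative lazy.nvim plugin spec (sketch, not prescriptive):
-- pass the Copilot provider options through to avante.nvim's setup.
return {
  "yetone/avante.nvim",
  opts = {
    provider = "copilot",
    copilot = {
      model = "claude-3.5-sonnet",
      -- max_tokens intentionally left unset until its semantics are clarified
    },
  },
}
```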
There are some considerations regarding the optimal value for max_tokens that need to be addressed. The model specifications table shows "max context" and "max output" values for each model, but their relationship to the max_tokens parameter is not yet clear:
| Model Family | Model Name | Type | Max Context | Max Output | Tokenizer | Features |
|---|---|---|---|---|---|---|
| gpt-4-turbo | GPT 4 Turbo | chat | 128000 | 4096 | cl100k_base | tool_calls, parallel_tool_calls |
| o1-mini | o1-mini (Preview) | chat | 128000 | - | o200k_base | - |
| gpt-4 | GPT 4 | chat | 32768 | 4096 | cl100k_base | tool_calls |
| text-embedding-3-small | Embedding V3 small | embeddings | - | - | cl100k_base | dimensions |
| text-embedding-3-small | Embedding V3 small (Inference) | embeddings | - | - | cl100k_base | dimensions |
| claude-3.5-sonnet | Claude 3.5 Sonnet (Preview) | chat | 200000 | 4096 | o200k_base | - |
| gpt-3.5-turbo | GPT 3.5 Turbo | chat | 16384 | 4096 | cl100k_base | tool_calls |
| gpt-4o | GPT 4o | chat | 128000 | 4096 | o200k_base | tool_calls, parallel_tool_calls |
| gpt-4o | GPT 4o | chat | 128000 | 16384 | o200k_base | tool_calls, parallel_tool_calls |
| gpt-4o-mini | GPT 4o Mini | chat | 128000 | 4096 | o200k_base | tool_calls, parallel_tool_calls |
| text-embedding-ada-002 | Embedding V2 Ada | embeddings | - | - | cl100k_base | - |
| o1 | o1-preview (Preview) | chat | 128000 | - | o200k_base | - |
If anyone has insights on how these limits correlate, please share your understanding.
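One plausible reading, offered purely as a sketch: "max context" is the total input+output window, while "max output" is the ceiling a max_tokens request should respect, so a client could clamp the requested value to the model's advertised output limit. The limits below are copied from the table; the helper itself is hypothetical and not part of avante.nvim:

```lua
-- Hypothetical helper (assumption: max_tokens must not exceed the
-- model's "max output" column from the table above).
local max_output = {
  ["gpt-4-turbo"] = 4096,
  ["gpt-4o"] = 4096, -- one gpt-4o entry advertises 16384 instead
  ["gpt-4o-mini"] = 4096,
  ["claude-3.5-sonnet"] = 4096,
  ["gpt-3.5-turbo"] = 4096,
}

local function clamp_max_tokens(model, requested)
  local limit = max_output[model]
  if limit == nil then
    -- unknown model (e.g. o1 family lists no max output): keep caller's value
    return requested
  end
  return math.min(requested, limit)
end

-- e.g. clamp_max_tokens("claude-3.5-sonnet", 8192) yields 4096
```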
Additionally, the following tasks remain:

- Support for the o1-preview and o1-mini models in avante.nvim
- Verifying whether model = "claude-3.5-sonnet" is sufficient, or if additional adjustments are needed (e.g., in the system prompt)

cool
What is the reason for this? Is it because of my network?
I never had the issue you mentioned. In my case it is working great:
@msdone-lwt do you happen to live in Eastern Asia, most likely in:
?
I am in China. If I turn on the network proxy, it doesn't respond at all; if I turn it off, it returns an error: model access is not permitted per policy settings 😥
I was curious because, just for testing, I played around with the new Claude 3.5 Sonnet integration with avante.nvim:

It seems to be working pretty well! 🙂
Joking aside, try this:

1. Set up https://github.com/CopilotC-Nvim/CopilotChat.nvim
2. Perform a couple of requests to the Claude 3.5 Sonnet model.
3. Switch back to avante.nvim using the config I posted before:
```lua
local my_opts = {
  provider = "copilot",
  copilot = {
    model = "claude-3.5-sonnet",
    -- max_tokens = 4096,
  },
}
```
> model access is not permitted per policy settings

Not an issue of avante, but of the GitHub rollout. I'm getting the same.
When I did my first test, using https://github.com/CopilotC-Nvim/CopilotChat.nvim, I also received that error (I never had it with avante.nvim). Probably a few minutes later, I got the permission granted.
These are the models I can currently use with my GitHub Copilot subscription:
> model access is not permitted per policy settings
> not an issue of avante, but of github rollout. i'm getting the same.

There is no option in my settings to enable Claude.
> model access is not permitted per policy settings
> not an issue of avante, but of github rollout. i'm getting the same. Claude 3.5 Sonnet Announcement and Rollout
> There is no option in my settings to start Claude

There is no Claude option in the Copilot plugin in my VSCode either 😂 WTF
@pidgeon777 Okay, I will try it tomorrow,
you have to wait for it to roll out to your account
> When I did my first test, using https://github.com/CopilotC-Nvim/CopilotChat.nvim, I also received that error (I never had it with avante.nvim). Probably a few minutes later I got the permission granted. These are the models I can currently use with my GitHub Copilot subscription:
@pidgeon777 How does your "model selector" work?
I see, but I'm curious why you are in the promotion plan. Is it random?
I'm not. I'm also waiting same as you. It's just probably rolling out slowly
> When I did my first test, using https://github.com/CopilotC-Nvim/CopilotChat.nvim, I also received that error (I never had it with avante.nvim). Probably a few minutes later I got the permission granted. These are the models I can currently use with my GitHub Copilot subscription:
> @pidgeon777 How does your "model selector" work?
It's a very cool feature of https://github.com/CopilotC-Nvim/CopilotChat.nvim:
https://github.com/CopilotC-Nvim/CopilotChat.nvim?tab=readme-ov-file#commands
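For what it's worth, recent CopilotChat.nvim versions expose the selector as a `:CopilotChatModels` command (check the commands section linked above, as command names may change between releases). A mapping like the following is just one illustrative way to reach it:

```lua
-- Example keymap (illustrative only): open CopilotChat.nvim's model picker.
vim.keymap.set("n", "<leader>cm", "<cmd>CopilotChatModels<cr>",
  { desc = "CopilotChat: select model" })
```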
@repparw Just now, I saw that I could use claude-3.5-sonnet.
Thanks, with this workaround it is enabled now.
@msdone-lwt and @gam-phon, was it enabled after following the method I shared here?:
https://github.com/yetone/avante.nvim/issues/733#issuecomment-2449795633
@repparw did you also try? Did it work for you?
I have already enabled Anthropic Claude 3.5 Sonnet in Copilot in my GitHub account, but I still don't see Claude when selecting a model in VSCode. Additionally, when I set the Copilot model to claude-3.5-sonnet in avante.nvim, it returns: Error: 'API request failed with status 403. Body: "access denied"'
CopilotChat.nvim
> @msdone-lwt and @gam-phon, was it enabled after following the method I shared here?:
> @repparw did you also try? Did it work for you?
Yes, after following your method exactly, it was enabled immediately. Before that, I did not have access to Claude 3.5 Sonnet.
> @repparw did you also try? Did it work for you?
policy still hasn't rolled out for me, sadly
Rolled out now and working. Side note: has anyone figured out what to do with max_tokens for Claude?
Thanks, I can use Claude now too
My idea originated from an interesting discovery I made: initially, the model availability issue was also present in the CopilotChat.nvim plugin (repository link). A subsequent commit addressed this, likely by modifying the model activation method, which enabled the use of the Claude 3.5 Sonnet model.
These enhancements haven't been implemented in avante.nvim yet. There's a need for support of the o1-preview and o1-mini models, as well as future compatibility with Gemini 1.5 Pro once it becomes available.
Currently, our workflow involves using CopilotChat.nvim as a preliminary "unlocking" authentication step, enabling us to utilize all features in avante.nvim afterward.
For GitHub Copilot subscribers, I strongly recommend using both plugins in tandem. Based on my testing:
CopilotChat.nvim:
avante.nvim:
The hope is for avante.nvim to evolve by incorporating workspace parsing functionality, which would provide better context awareness when proposing code modifications, and also to support the remaining models offered in the GitHub Copilot subscription. Finally, a model selector would be great, when performing requests.
@repparw I'm also interested in knowing more about max_tokens and similar parameters. Which would be the optimal values, for example?
> Currently, our workflow involves using CopilotChat.nvim as a preliminary "unlocking" authentication step, enabling us to utilize all features in avante.nvim afterward.

Not true. The rollout just wasn't for all users at the same time. I didn't install CopilotChat; I just waited until the policy appeared and enabled it.
Then you're suggesting it could be a coincidence that, after following the method, they were able to use avante.nvim with Claude 3.5 Sonnet? As far as I know, it could even be the case.
Yes. CopilotChat.nvim has nothing to do with it
> Thanks, I can use Claude now too

I am also in China; using the Claude model is much slower than the GPT models.
Feature request
OpenAI o1-preview and o1-mini are now available in GitHub Copilot Chat in VS Code and in the GitHub Models playground.
https://github.blog/news-insights/product-news/try-out-openai-o1-in-github-copilot-and-models/
All the necessary information on how this could be ported to avante.nvim can be found at the following link: https://github.com/CopilotC-Nvim/CopilotChat.nvim/issues/419
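One API-level wrinkle worth noting for a port (true of the o1 preview models at the time of writing, though subject to change): o1-preview and o1-mini rejected some parameters that other chat models accept, such as system messages and streaming. A provider implementation would likely need per-model request shaping along these lines; the function and option names below are hypothetical, not avante.nvim's actual API:

```lua
-- Sketch (hypothetical helper): adapt an outgoing chat request for
-- o1-family models, which rejected system messages and streaming.
local function shape_request(model, messages, opts)
  if model:match("^o1") then
    opts.stream = false -- o1 preview endpoints did not support streaming
    local shaped = {}
    for _, m in ipairs(messages) do
      if m.role == "system" then
        -- fold the system prompt into a user message instead
        table.insert(shaped, { role = "user", content = m.content })
      else
        table.insert(shaped, m)
      end
    end
    return shaped, opts
  end
  return messages, opts
end
```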
Motivation
To utilize the OpenAI o1 model, it is not mandatory to rely on the costly API services. Instead, a subscription to GitHub Copilot Chat can suffice for accessing the model's capabilities. This alternative provides a more cost-effective solution while still leveraging the advanced functionalities of the o1 model. By subscribing to GitHub Copilot Chat, users can integrate AI-driven assistance directly into their development workflow, enhancing productivity and code quality without incurring significant expenses.

Other

No response