microsoft / vscode-copilot-release

Feedback on GitHub Copilot Chat UX in Visual Studio Code.
https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat

GPT-4 Turbo In Copilot Chat #560

Closed. BeamoINT closed this issue 2 months ago.

BeamoINT commented 10 months ago

Copilot Chat just got GPT-4 inside it, which is a great thing and I love it. It does use GPT-3.5 for some things, though, when it really shouldn't, and that makes the code worse. Copilot should just use GPT-4 Turbo: it is now a smarter model than GPT-4 while also being faster and cheaper, so there is no need to switch between different models like Codex, GPT-3.5, and GPT-4. It should use GPT-4 Turbo all the time, for everything.

MarvinButh commented 10 months ago

Definitely a must-have feature for copilot!

eloyparedesq commented 10 months ago

I hope they make this update as soon as possible

mreduar commented 10 months ago

It would be a great update.

iwangbowen commented 10 months ago

Definitely

BeamoINT commented 10 months ago

It would be a great update indeed

kurutah commented 10 months ago

> It does use GPT-3.5 for some things, though, when it really shouldn't.

Where? And regarding GPT-4 Turbo, many people think it's worse for coding than GPT-4, so I don't know...

outgaze commented 10 months ago

> Where? And regarding GPT-4 Turbo, many people think it's worse for coding than GPT-4, so I don't know...

It's not worse. For code suggestions it uses GPT-3.5, but for text and reasoning it uses GPT-4, so the code it outputs is still poor: it doesn't benefit from GPT-4's instruction fine-tuning, doesn't have the same memory span, and only takes up to a 16k-token context. That's why it currently only feeds specific parts of your codebase into the prompt rather than the whole thing. They should really use GPT-4 Turbo: it has a larger context window, so they wouldn't need their guess-heavy embedding-based context-selection approach that leads to hallucinations, and Turbo is also up to date with most libraries thanks to its 2023 knowledge cutoff. 😉

BeamoINT commented 10 months ago

> Where? And regarding GPT-4 Turbo, many people think it's worse for coding than GPT-4, so I don't know...

> It's not worse. For code suggestions it uses GPT-3.5, but for text and reasoning it uses GPT-4 ... They should really use GPT-4 Turbo: it has a larger context window ... and is up to date with most libraries thanks to its 2023 knowledge cutoff.

Code suggestions actually use OpenAI Codex. It uses GPT-4 for reasoning on some things and then dynamically switches to other models (like GPT-3.5) depending on how difficult a given part of the task is and how much reasoning it thinks it needs. GPT-4 Turbo has since been improved and is now about on par with GPT-4, and with the updated knowledge cutoff and larger context window it is better than GPT-4 overall for coding. GPT-4 does come in 8k and 32k context variants, but the one in Copilot is the 8k context since it's cheaper.
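To make the kind of routing described above easier to picture, here is a rough, purely hypothetical TypeScript sketch; the task kinds, scoring, and thresholds are made up for illustration and are not Copilot's actual implementation:

```typescript
// Purely hypothetical sketch of per-task model routing; the model names,
// task kinds, and thresholds are illustrative, not Copilot's real logic.
type Model = 'codex' | 'gpt-3.5-turbo' | 'gpt-4';

interface ChatTask {
  kind: 'completion' | 'chat';
  estimatedReasoningLoad: number; // 0..1, however a service might score difficulty
}

function pickModel(task: ChatTask): Model {
  if (task.kind === 'completion') return 'codex';   // inline code suggestions
  return task.estimatedReasoningLoad < 0.3
    ? 'gpt-3.5-turbo'                               // light reasoning, cheaper
    : 'gpt-4';                                      // heavy reasoning
}

// Example: a hard chat question gets routed to the stronger model.
console.log(pickModel({ kind: 'chat', estimatedReasoningLoad: 0.8 })); // "gpt-4"
```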

fisforfaheem commented 10 months ago

They should enable it right now and save money as well

outgaze commented 10 months ago

> Code suggestions actually use OpenAI Codex. It uses GPT-4 for reasoning on some things and then dynamically switches to other models (like GPT-3.5) depending on how difficult a given part of the task is ...

Well, it should use GPT-4 for both, honestly, because right now it can't even solve an easy LeetCode question.

fisforfaheem commented 9 months ago

Will someone from GitHub reply? They write amazing blogs on Copilot but don't listen to issues.

digitarald commented 9 months ago

No worries, the team has read and tagged it. We are staying on top of model releases as part of our multi-model approach, but we don't have an ETA for when Copilot Chat will use GPT-4-Turbo.

Meanwhile, please follow this issue to get updates and add your 👍.

fisforfaheem commented 9 months ago

Looking forward ⏩


erhan0 commented 9 months ago

> We are staying on top of model releases as part of our multi-model approach, but we don't have an ETA for when Copilot Chat will use GPT-4-Turbo.

Excited for at least GPT-4 for coding suggestions, since it's been out for almost a year! Turbo after that would be icing on the cake!

ChrisBuilds commented 9 months ago

> We are staying on top of model releases as part of our multi-model approach, but we don't have an ETA for when Copilot Chat will use GPT-4-Turbo.

Can you comment on whether GPT-4 is being used in any part of Copilot at the moment? Which models are being used for chat, code, etc.?

2-fly-4-ai commented 9 months ago

> We are staying on top of model releases as part of our multi-model approach, but we don't have an ETA for when Copilot Chat will use GPT-4-Turbo.

Give it to us nowwwwwwww... pleaaaasseee. Tell Bill Gates to stop messing around trying to recycle poo and get on this. Ask him if he could personally write the code to connect it up to GPT-4 Turbo. Thanks.

I can code as well if you need. Just let your boy Billy G know.

fisforfaheem commented 9 months ago

Haha, I hope the team sees this! As a Flutter dev myself, I am missing out on so much due to the old knowledge.


HoverCatz commented 8 months ago

Any updates?? The GitHub Copilot purchase page still says GPT-3.5, which is clearly subpar, as someone else here said. (screenshot attached)

erhan0 commented 8 months ago

By the time they add GPT-4, GPT-4.5 will be out...

b4git commented 8 months ago

(screenshot attached)

fisforfaheem commented 8 months ago

Copilot is giving 2021 answers :( It's 2024, I'm doing Flutter app development, and it thinks it's 2021.


timkitch commented 7 months ago

Putting aside the specifics of which LLMs are being used, I have a general observation. Every LLM is eventually going to suffer from one issue or another. Let's say Copilot does shift to GPT-4 Turbo today. Tomorrow, OpenAI decides "we're not updating that model anymore - start using this new one." So, six months down the road, that model will also be outdated, at least by the expectations of the developers I know.

It seems to me that relying on particular LLMs, combined with needing several months to turn the ship to use a different set of LLMs, is just going to continue to present problems for us users.

Any coding assistant I'm going to rely upon or recommend to my organization needs to be able to assist with recent APIs, frameworks, patterns, and language features. For me, "recent" means at most 6 months out of date (and even that is really still far too outdated). I'm not talking about fringe APIs (like an obscure library someone decided to share that 2 people in the world are using). But as of today, I expect an assistant to be helpful with things like the technologies folks have discussed in this issue, as well as things like Spring Boot 3.x (very popular and GA'd well over a year ago) or the LangChain framework (also very popular and released over a year ago). I have zero interest in developing a new Spring Boot application today using Spring Boot 2.x. Yesterday Copilot told me it's only aware of version 2.5, which reached EOL last year. Unless your job is fixing bugs in a very old codebase, that's totally useless.

I love the features that Copilot offers! But, only being able to help with what I think most of us would consider "legacy" technologies - especially, given the accelerating rate of change today - is a deal-breaker for me. This is why I'm looking at other coding assistants that allow me to choose my own LLM and toggle between GPT-4 Turbo, GPT 3.5 Turbo, a Mistral AI endpoint, Deepseek Coder running locally in Ollama, etc.

I don't need details about "how" this will be resolved. But I'm surveying the field to make a recommendation to my org, and before I can recommend Copilot, I'd have to know the issue IS going to be resolved, along with some sort of ETA. And not just resolved in a temporary way, but in a way that keeps us up to date going forward.
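To make the "choose my own LLM" setup mentioned above concrete, here is a minimal sketch; the endpoints, keys, and model names are examples of an OpenAI-compatible API (which local servers such as Ollama also expose), not anything Copilot offers:

```typescript
// Minimal "bring your own LLM" sketch. Nothing here is Copilot's API; the
// provider names, endpoints, and models are placeholders for your own setup.
interface ChatProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Works against any OpenAI-compatible /chat/completions endpoint, including
// local servers (e.g. Ollama's OpenAI-compatible mode on port 11434).
function openAiCompatibleProvider(baseUrl: string, apiKey: string, model: string): ChatProvider {
  return {
    name: model,
    async complete(prompt: string): Promise<string> {
      const res = await fetch(`${baseUrl}/chat/completions`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
        body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
      });
      const data = await res.json();
      return data.choices[0].message.content;
    },
  };
}

// Swapping models is then just a matter of picking a different provider.
const providers: Record<string, ChatProvider> = {
  'gpt-4-turbo': openAiCompatibleProvider('https://api.openai.com/v1', 'YOUR_OPENAI_KEY', 'gpt-4-turbo'),
  'local-deepseek': openAiCompatibleProvider('http://localhost:11434/v1', 'ollama', 'deepseek-coder'),
};
```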

HoverCatz commented 7 months ago

> Every LLM is eventually going to suffer from one issue or another. Let's say Copilot does shift to GPT-4 Turbo today. Tomorrow, OpenAI decides "we're not updating that model anymore - start using this new one." So, six months down the road, that model will also be outdated, at least by the expectations of the developers I know.

Sounds to me like you want to stick to an old version X forever, because company Y will release a new version in the future. What a stupid idea not to upgrade. Sorry

hmorneau commented 7 months ago

In the meantime, does anyone know if Turbo is coming soon? It would be nice to have the option to use GPT-4 or GPT-4 Turbo directly in the extension settings.

winzig commented 7 months ago

Perhaps they're in the same boat many of us are in... waiting for GPT-4 Turbo to leave "preview" mode and enter "production" mode? They've already updated the GPT-4 Turbo model once since the first preview was announced last year.

So perhaps the better question for now is: when will GPT-4 Turbo be production-ready?

timkitch commented 7 months ago

> Sounds to me like you want to stick to an old version X forever, because company Y will release a new version in the future. What a stupid idea not to upgrade. Sorry

Ummm... maybe read my entire reply before commenting. I do not work on the GitHub team, which is clear from the rest of my response.

fisforfaheem commented 7 months ago

I hate that not only is the context window smaller than in the free ChatGPT version, but the answers given are also very 2021-era.


dzirtt commented 7 months ago

This is a needed improvement

hmorneau commented 7 months ago

I wonder if they could do what Microsoft Copilot does and perform a search before answering. I have tried Copilot Pro with GPT-4 selected and it works great, because it pulls fresh data in before answering a question.
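As a rough illustration of that "search first, then answer" idea, here's a hypothetical sketch; fetchSearchResults and askModel are placeholders, not real Copilot or Bing APIs:

```typescript
// Hypothetical retrieval-before-answer flow. The two declared functions stand
// in for whatever search API and model API a real client would use.
declare function fetchSearchResults(query: string): Promise<{ title: string; snippet: string }[]>;
declare function askModel(prompt: string): Promise<string>;

async function answerWithFreshContext(question: string): Promise<string> {
  const results = await fetchSearchResults(question);
  const context = results
    .slice(0, 3)                                  // keep the prompt small
    .map(r => `- ${r.title}: ${r.snippet}`)
    .join('\n');
  const prompt =
    `Use the search results below if they are relevant; they may be newer than your training data.\n\n` +
    `Search results:\n${context}\n\nQuestion: ${question}`;
  return askModel(prompt);                        // model answers with fresh context in hand
}
```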

hal1984 commented 7 months ago

It would be great if GitHub Copilot started using the latest GPT-4 Turbo versions. The latest one was trained on data up to December 2023:

https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo

For example, neither Chat nor inline suggestions work well with Angular Signals.
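For instance, this is the kind of standard Angular Signals code (Angular 16+) that a model with an older knowledge cutoff simply hasn't seen:

```typescript
// Plain Angular Signals usage (framework API, nothing custom) that pre-2023
// models tend to mangle or rewrite as RxJS/zone-based code.
import { Component, computed, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  standalone: true,
  template: `<button (click)="increment()">{{ count() }} (double: {{ double() }})</button>`,
})
export class CounterComponent {
  count = signal(0);                          // writable signal
  double = computed(() => this.count() * 2);  // derived signal, recomputed automatically

  increment(): void {
    this.count.update(v => v + 1);            // update based on the previous value
  }
}
```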

monko9j1 commented 6 months ago

I ended my GitHub Copilot subscription until this is resolved. How ridiculous; not even a single recent response from the team either?

fisforfaheem commented 6 months ago

Agreed; the team is more focused on diversity and less useful features, like adding Hindi, Telugu, etc., instead of fixing the main issues.


kurutah commented 6 months ago

> we don't have an ETA for when Copilot Chat will use GPT-4-Turbo

We could be waiting for years... To be honest, I can understand that creating new GPT models is hard and expensive, but leaving us for many months with this inline chat that is just impossible to use (I am not sure what benchmarks they are talking about; it's just unusable), when they already have a new and fast model but don't want to implement it, with some weird excuses... I can't understand it.

But maybe Gemini can help with the ETA for Copilot Chat; competition is about the only reason they would move forward at this point.

jiantosca commented 6 months ago

Nothing new here in my comment... just dropped in to say Copilot was great when I started a personal project to learn something new, but as soon as I used it with something I'm familiar with, I quickly realized how outdated it is. I was bummed to find out Copilot thinks the latest version of Java is 16 and Spring Boot is 2.5.

HoverCatz commented 6 months ago

I'm stopping my GitHub Copilot subscription right now, until they get their act together and upgrade to GPT-4, or add Gemini support.

hmorneau commented 6 months ago

@HoverCatz I did the same. I think they will always be lagging behind simply due to the size of the organization; by the time everything gets approved and tested, another model has had time to come out. We will get Turbo, and GPT-5 will be around the corner. Anyway, the best model for programming right now is Claude 3 Opus, which they probably won't offer, for obvious reasons.

hlhdaiaaii commented 6 months ago

I moved from Copilot to Cody after two years because GPT-4 Turbo in Cody (which now also supports Claude 3) is much better than GPT-4 in Copilot Chat. I believe Copilot's auto-completion is its best feature, and using Cody's auto-completion has really made me miss it. I wish I could use Cody's chat feature and Copilot's auto-completion feature without having to purchase two subscriptions at once.

FYI, I have been cloning and customizing a huge open-source codebase, and Cody's features with GPT-4 Turbo do a better job of understanding the entire codebase.

hmorneau commented 6 months ago

@hlhdaiaaii You should try Codeium for autocomplete; for me the suggestions are better than GitHub Copilot's, and it's free to use. Cody chat on Claude 3 Opus plus Codeium autocomplete is where it's at right now. If you need a larger context window, you can use Phind Pro (32k tokens vs. 8k tokens for Cody).

I hope GitHub Copilot updates soon, but then 4.5 or 5 will be released and the same story will repeat itself.

fisforfaheem commented 5 months ago

I moved to Cody, as they are using the latest tech. Copilot has become dumb and also has no idea of the full project :( It should know the whole project.


kurutah commented 5 months ago

> @hlhdaiaaii You should try Codeium for autocomplete; for me the suggestions are better than GitHub Copilot's, and it's free to use. Cody chat on Claude 3 Opus plus Codeium autocomplete is where it's at right now. If you need a larger context window, you can use Phind Pro (32k tokens vs. 8k tokens for Cody).

Thanks for the advice; Codeium autocomplete is great. Don't you think their chat is also good?

lramos15 commented 5 months ago

I've gone ahead and locked this thread as it has gotten off topic.

We understand everyone's frustration here and are working diligently behind the scenes to upgrade the models while ensuring we do not regress any of the core scenarios that drive Copilot Chat. We hope to share more soon, thank you for understanding.

digitarald commented 2 months ago

Over the past weeks we have rolled out GPT-4-Turbo to all GitHub Copilot Chat users on the latest extension version 0.16+.

GPT-4-Turbo is now used for all answers in GitHub Copilot's chat panel. The most noticeable benefits of GPT-4-Turbo are faster performance and a more recent knowledge cutoff (December 2023). We still rely on other models, informed by offline and online evaluations that balance quality and performance, picking the best model for the job.

Please report any problems with this update as new issues.
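For anyone who wants to confirm the installed version programmatically rather than through the Extensions view, here is a small sketch using the public VS Code API from another extension's code; the extension ID matches the Marketplace listing above, and this is just an illustration, not an official support path:

```typescript
// Reads the installed Copilot Chat extension's version from the extension host.
import * as vscode from 'vscode';

export function copilotChatVersion(): string | undefined {
  const ext = vscode.extensions.getExtension('GitHub.copilot-chat');
  return ext?.packageJSON.version; // e.g. "0.16.1" or later receives GPT-4-Turbo chat answers
}
```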

digitarald commented 1 month ago

Adding for posterity, GPT-4o is now used in Copilot Chat version 0.17 and up: https://aka.ms/github-copilot-chat-gpt-4o

digitarald commented 2 days ago

GPT-4o is fully rolled out to Inline Chat (versions 0.19 and above): https://github.com/microsoft/vscode-copilot-release/issues/664#issuecomment-2356320222

Relatedly, the Chat view pane has been using 4o since 0.17 and got a larger context window in the past few weeks.

Happy Smart Coding!