microsoft / vscode-copilot-release

Feedback on GitHub Copilot Chat UX in Visual Studio Code.
https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat
Creative Commons Attribution 4.0 International

GPT-4 still not enabled; it says it's using GPT-3 from 2021. Why? #714

Closed · fisforfaheem closed this 8 months ago

fisforfaheem commented 8 months ago

I can't use it even though it should be enabled.

mjbvz commented 8 months ago

Where does it say that?

dkampien commented 8 months ago

2024-01-04 23:41:33.363 [info] [chat fetch] engine https://api.githubcopilot.com/chat
2024-01-04 23:41:33.363 [info] [chat fetch] modelMaxTokenWindow 4096
2024-01-04 23:41:33.363 [info] [chat fetch] chat model gpt-4
2024-01-04 23:41:34.613 [info] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 1250 ms
2024-01-04 23:41:34.615 [info] [streamMessages] message 0 returned. finish reason: [stop]
2024-01-04 23:41:34.615 [info] [streamChoices] request done: requestId: [115bfc0a-2519-43d6-85ff-e0cdfa0031a2] responseId: [8e41cd2a-37b1-446f-8f64-3a6b7c954d5b] model deployment ID: [x8d3c95500d67]
2024-01-04 23:41:34.617 [info] [chat fetch] engine https://api.githubcopilot.com/chat
2024-01-04 23:41:34.617 [info] [chat fetch] modelMaxTokenWindow 8192
2024-01-04 23:41:34.617 [info] [chat fetch] chat model gpt-3.5-turbo
2024-01-04 23:41:35.163 [info] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 546 ms
2024-01-04 23:41:35.164 [info] [streamMessages] message 0 returned. finish reason: [stop]
2024-01-04 23:41:35.165 [info] [streamChoices] request done: requestId: [8ff7299c-3ba1-41ea-9725-3538410b1f8a] responseId: [fcb40ba9-7a3d-4363-892e-b13274b718bb] model deployment ID: [xd71d885bb108]
2024-01-04 23:41:51.891 [info] [chat fetch] engine https://api.githubcopilot.com/chat

It starts with gpt-4, then switches to gpt-3.5 mid-answer.
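If you want to check the same thing in your own log, here is a minimal sketch (assuming the `[chat fetch]` line format shown above; the file path is just an example) that lists which chat model each request used:

```typescript
import { readFileSync } from "fs";

// Minimal sketch: scan an exported Copilot Chat log and print which
// chat model each "[chat fetch]" request used. Assumes the log line
// format shown above; "copilot-chat.log" is just an example path.
function listChatModels(logPath: string): void {
  const lines = readFileSync(logPath, "utf8").split("\n");
  for (const line of lines) {
    // e.g. "2024-01-04 23:41:33.363 [info] [chat fetch] chat model gpt-4"
    const match = line.match(/^(\S+ \S+) \[info\] \[chat fetch\] chat model (\S+)/);
    if (match) {
      const [, timestamp, model] = match;
      console.log(`${timestamp}  ${model}`);
    }
  }
}

listChatModels("copilot-chat.log");
```

Run against the excerpt above, it prints one line per request, e.g. `2024-01-04 23:41:33.363  gpt-4` followed by `2024-01-04 23:41:34.617  gpt-3.5-turbo`.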

mjbvz commented 8 months ago

Those are different requests. We use different models for different purposes because they have different performance/cost tradeoffs. One is not strictly better than the other.
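As an illustration of that kind of routing (a hypothetical sketch only, not Copilot's actual implementation; the purpose names are made up, and the token windows simply mirror the log excerpt above):

```typescript
// Illustrative only: a hypothetical router showing the pattern described
// above, where a smaller model handles auxiliary requests and a larger
// model handles the main answer. None of these names come from the
// actual Copilot Chat code.
type ChatPurpose = "mainAnswer" | "followUpSuggestions" | "intentDetection";

interface ModelChoice {
  model: string;
  maxTokenWindow: number;
}

function pickModel(purpose: ChatPurpose): ModelChoice {
  switch (purpose) {
    case "mainAnswer":
      // Larger model: better quality, higher latency and cost.
      return { model: "gpt-4", maxTokenWindow: 4096 };
    default:
      // Smaller model: good enough for short auxiliary requests,
      // noticeably faster and cheaper.
      return { model: "gpt-3.5-turbo", maxTokenWindow: 8192 };
  }
}

console.log(pickModel("mainAnswer"));          // { model: "gpt-4", ... }
console.log(pickModel("followUpSuggestions")); // { model: "gpt-3.5-turbo", ... }
```

Under this reading, the two `[chat fetch]` entries in the log are two independent calls made for different purposes, not one answer being switched to a different model mid-stream.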