Open worksofliam opened 4 days ago
I have a colleague who has checked out the same code, and the issue is not happening for them.
More requests that finish but appear to be cut off:
2024-07-01 15:51:47.308 [info] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 1089 ms
2024-07-01 15:52:02.460 [info] [streamMessages] message 0 returned. finish reason: [stop]
2024-07-01 15:52:02.461 [info] [streamChoices] request done: requestId: [2f06e0be-77a4-40b3-a539-f794b69e916e] model deployment ID: []
2024-07-01 15:53:31.062 [info] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 1203 ms
2024-07-01 15:53:34.183 [info] [streamMessages] message 0 returned. finish reason: [stop]
2024-07-01 15:53:34.184 [info] [streamChoices] request done: requestId: [9e77cd62-0045-4818-9f29-26f48f0aa608] model deployment ID: []
I have recreated this issue with the vscode-extension-samples -> chat-sample: https://github.com/microsoft/vscode-extension-samples/tree/main/chat-sample
2024-07-01 16:00:55.412 [info] [chat fetch] request.response: [https://api.githubcopilot.com/chat/completions], took 329 ms
2024-07-01 16:00:55.929 [info] [streamMessages] message 0 returned. finish reason: [stop]
2024-07-01 16:00:55.929 [info] [streamChoices] request done: requestId: [94db1a75-8a4f-4dba-814c-6fddb30bc239] model deployment ID: [dep-3]
So you're seeing the issue with the chat-sample, and no modifications to it? A couple days later, is this still happening? Could have been a transient service issue.
It looks like you found the "Copilot: Collect Diagnostics" command, thanks. Could you also run "Developer: GitHub Copilot Chat Diagnostics"?
I'm not sure what else to look at, any ideas @jrieken?
What version of VS Code and of Chat are you using? Around a week ago there was a bug in this area which has since been fixed.
My Copilot responses are being cut off when using the API to implement my chat. It doesn't seem to happen every time.
Code
```ts
async function streamModelResponse(
  messages: vscode.LanguageModelChatMessage[],
  stream: vscode.ChatResponseStream,
  token: vscode.CancellationToken
) {
  const chosenProvider = AiConfig.getProvider();
  const chosenModel = AiConfig.getModel();

  if (chosenProvider === `none`) {
    stream.markdown(
      `No AI provider selected. Please select an AI provider and model.`
    );
    stream.button({
      command: `vscode-db2i.ai.changeModel`,
      title: `Select AI Provider and Model`,
    });
    return;
  }

  showModelProviderIfNeeded(stream, chosenProvider, chosenModel);
  stream.progress(`Provider: ${chosenProvider} Model: ${chosenModel}`);

  return copilotRequest(chosenModel, messages, {}, token, stream);
}

async function copilotRequest(
  model: string,
  messages: LanguageModelChatMessage[],
  options: LanguageModelChatRequestOptions,
  token: CancellationToken,
  stream: vscode.ChatResponseStream
): Promise
```

More details with screenshots
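The body of `copilotRequest` is cut off above. For context, the usual pattern for consuming a streamed Language Model response is a `for await` loop over the response fragments, forwarding each one to the chat response stream; if that loop is abandoned early (for example when the cancellation token fires), the forwarded output looks truncated even though every consumed fragment arrived intact. Below is a minimal, self-contained sketch of that pattern in plain TypeScript — the `fakeResponseText` generator and the `cancelled` callback are stand-ins for the real `LanguageModelChatResponse.text` iterable and `CancellationToken`, not the extension's actual code:

```typescript
// Stand-in for a streamed LanguageModelChatResponse.text async iterable.
async function* fakeResponseText(
  cancelled: () => boolean
): AsyncGenerator<string> {
  for (const part of ["SELECT ", "* FROM ", "employee ", "LIMIT 10"]) {
    if (cancelled()) return; // stop yielding once cancellation is requested
    await new Promise((resolve) => setTimeout(resolve, 5)); // simulated network delay
    yield part;
  }
}

// Drains the whole stream before resolving; the collected output is complete.
async function consumeFully(
  out: string[],
  cancelled: () => boolean
): Promise<void> {
  for await (const frag of fakeResponseText(cancelled)) {
    out.push(frag); // in a real participant: stream.markdown(frag)
  }
}

// Stops as soon as cancellation fires after `cancelAfter` fragments;
// the collected output looks "cut off".
async function consumeUntilCancelled(
  out: string[],
  cancelAfter: number
): Promise<void> {
  let received = 0;
  const cancelled = () => received >= cancelAfter;
  for await (const frag of fakeResponseText(cancelled)) {
    out.push(frag);
    received++;
  }
}
```

The point of the sketch: a finish reason of `[stop]` in the logs only says the service finished the message, not that the consumer drained it, so a client-side early exit from the loop is one plausible place for output to go missing.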
When using standard Copilot, it does not cut off. Could it be the case that my input/context is too big?

![image](https://github.com/microsoft/vscode-copilot-release/assets/3708366/e447995c-a071-4ee5-93e0-d76cbd8a5ac6)

Further progress using a different model, but same issue.

![image](https://github.com/microsoft/vscode-copilot-release/assets/3708366/9631e8f2-1ea2-4457-92ab-60fa8bc21586)

Took out all my User messages / context and still getting the same issue.

![image](https://github.com/microsoft/vscode-copilot-release/assets/3708366/b9868d44-6d34-45be-b3af-96f5abf4ca49)

With logs:
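On the "input/context too big" theory: the Language Model API exposes `LanguageModelChat.countTokens` for an exact count against the chosen model, but a rough client-side estimate can be made without the `vscode` module. A hypothetical sketch — the ~4-characters-per-token ratio is a common approximation for English-like text, not an API guarantee, and `contextWindow` is an assumed parameter the caller would supply:

```typescript
// Rough token estimate (~4 characters per token for English-like text).
// Purely a heuristic; use LanguageModelChat.countTokens for real numbers.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Sums estimates over all prompt messages to sanity-check against a
// hypothetical context window before sending the request.
function promptFitsWindow(messages: string[], contextWindow: number): boolean {
  const total = messages.reduce((sum, m) => sum + estimateTokens(m), 0);
  return total <= contextWindow;
}
```

If the estimate comes out well under the model's window, an oversized prompt is unlikely to explain the truncation — which matches the observation above that removing all User messages did not help.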