Closed sdcanny closed 7 months ago
Yes, I noticed the same thing. I was highlighting my file today and asking it questions, and it was giving me random answers. In VS Code Insiders it shows a 400 Bad Request. My code is around 300 lines, roughly 3.5k tokens, so they must have reduced the token limit recently. We need at least 8k, since I know their GPT-3.5 Turbo model supports up to 8k.
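As a sanity check on the numbers above: a common rule of thumb is roughly 4 characters per token for English text and code, so a 300-line file at ~45 characters per line lands near 3.5k tokens. A minimal sketch of that estimate (the 4-chars-per-token ratio is an approximation, not Copilot's actual tokenizer):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    # This is an approximation, not the real tokenizer Copilot uses.
    return max(1, len(text) // 4)


# Hypothetical 300-line file, ~45 characters per line.
sample = ("x = compute_value(a, b)  # placeholder line\n" * 300)
print(estimate_tokens(sample))  # lands in the ~3.3k-token range
```

This only tells you whether a selection is plausibly near a context limit; the exact cutoff would depend on the server-side tokenizer and how much of the budget the prompt itself consumes.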
They could not have tested this product update; they pushed it without testing it one bit.
@datanetai @sdcanny I assume both your issues are with chat panel, not inline chat. Correct?
Yes, it used to happen when I highlighted a large file, but it's no longer happening in files with about 200-300 lines of code. It appears the context size has increased.
New releases don't usually decrease the context window, but we do iterate on how we rank and slice the various available context parts. So detailed bug reports of cases that seemed to work before but stopped working would help most.
@sdcanny your logs don't show chat invocations but ghost-text completions. Can you clarify and reproduce on 0.12?
I’m having the same issue. I’ve even tried a full clean reinstall of VS Code, Python, and Copilot, but I can’t get it to read my code snippet in context.
Most issues here seem large file related.
Duplicate of #1083
For me it happens in any script with over 80 lines. Relatedly, Copilot misreads line numbers, so it is often looking at the wrong thing: e.g., if I ask what is on line 41, it describes something else. I assume that is also why it keeps getting the context of the selected code wrong.
I've searched the bug reports and found a few similar issues, but I have not seen any of them resolved, and they typically don't provide much information. I did not see anything pertaining to this issue in the GitHub documentation, either.
I was successfully using Copilot for about a month and thoroughly enjoying its features, but within the last few days it's been incredibly hit-or-miss at actually seeing workspace data or highlighted code snippets. The Chat window almost never works, and Inline Chat has been degrading in reliability. I have tried opening individual documents, opening my workspace files, and more. It may have something to do with the length of the code in the file, but we're talking <500 lines, and I figure that shouldn't be an issue.
Symptoms typically start with "As an AI," followed by some excuse about how I failed to provide the code. Using @workspace will occasionally vary the output, but it seems to get worse as the project grows. Sometimes it will make remarks about only the first 30 lines or so of my code, if I ask it to describe something or to evaluate my code.
Things I've tried:
I've tested both the regular build and the Insiders build; both have the same issues.
Steps to Reproduce:
[INFO] [code-referencing] [2024-01-04T00:34:05.479Z] Public code references are enabled.
[INFO] [default] [2024-01-04T00:43:39.335Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:43:51.955Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:43:52.207Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:43:52.517Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:43:53.396Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 879 ms
[INFO] [streamChoices] [2024-01-04T00:43:53.400Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:43:53.403Z] request done: headerRequestId: [6f4ea3a6-d5fd-4bf0-99eb-60845c1b3370] model deployment ID: [wb5041db867a5]
[INFO] [ghostText] [2024-01-04T00:44:54.007Z] Cancelled during debounce
[INFO] [default] [2024-01-04T00:44:54.123Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:00.192Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:00.684Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 492 ms
[INFO] [streamChoices] [2024-01-04T00:45:00.689Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:00.693Z] solution 1 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:00.695Z] solution 2 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:00.696Z] request done: headerRequestId: [ba30472c-2740-4380-a9ac-4c1fd3877cb9] model deployment ID: [wb5041db867a5]
[INFO] [default] [2024-01-04T00:45:02.314Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:02.801Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 486 ms
[INFO] [streamChoices] [2024-01-04T00:45:02.804Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:02.808Z] solution 1 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:02.811Z] solution 2 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:02.812Z] request done: headerRequestId: [435622f0-4436-44e7-9f71-5ff79cea74c6] model deployment ID: [wb5041db867a5]
[INFO] [default] [2024-01-04T00:45:05.157Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:05.662Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 505 ms
[INFO] [streamChoices] [2024-01-04T00:45:05.664Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:05.666Z] request done: headerRequestId: [a7bfe358-125b-413c-b2f3-4fbe10d42826] model deployment ID: [wb5041db867a5]
[INFO] [default] [2024-01-04T00:45:17.295Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:17.639Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:18.016Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 377 ms
[INFO] [streamChoices] [2024-01-04T00:45:18.018Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:18.019Z] request done: headerRequestId: [d22ec6ce-2166-40d5-bc51-3a8767b11f87] model deployment ID: [wb5041db867a5]
[INFO] [default] [2024-01-04T00:45:18.888Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:19.270Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 382 ms
[INFO] [streamChoices] [2024-01-04T00:45:19.277Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:19.281Z] solution 2 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:19.285Z] solution 1 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:19.286Z] request done: headerRequestId: [fd9a7ca5-0875-4813-be87-6401392422af] model deployment ID: [wb5041db867a5]
[INFO] [default] [2024-01-04T00:45:19.867Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex
[INFO] [default] [2024-01-04T00:45:20.355Z] request.response: [https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex/completions] took 487 ms
[INFO] [streamChoices] [2024-01-04T00:45:20.357Z] solution 0 returned. finish reason: [stop]
[INFO] [streamChoices] [2024-01-04T00:45:20.359Z] request done: headerRequestId: [f3312a8a-41e9-4004-8dbb-cdef1073ba94] model deployment ID: [wb5041db867a5]