Today I ran into an issue where the number of tokens in my prompt for a regular :ChatGPT command was out of sync with what I actually typed. For example (copied from the :ChatGPT panel's main window):
What is 2 + 2?
This model's maximum context length is 16385 tokens. However, you requested 16808 tokens (15808 in the messages, 1000 in the completion). Please reduce the length of the messages or completion.
I have two theories about what might be happening:

- It may be sending the entire panel history as part of the payload.
- I just installed Copilot, and I wonder whether Copilot suggestions are somehow being included, even though they do not appear in the main window of the :ChatGPT panel after I press Enter.
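A quick way to sanity-check the first theory: if the full panel history is being resent, the reported 15808 message tokens should roughly match the accumulated conversation text. The snippet below uses the common ~4 characters-per-token heuristic for English text (a rough estimate only; `tiktoken` would give an exact count), with a hypothetical history size for illustration:

```python
# Rough sanity check: could accumulated panel history plausibly account
# for ~15808 message tokens? Uses the ~4 chars/token heuristic.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

# Hypothetical example: the short prompt vs. a long accumulated history.
prompt = "What is 2 + 2?"
history = "some earlier exchange... " * 2500  # ~62500 chars of history

print(estimate_tokens(prompt))   # → 3
print(estimate_tokens(history))  # → 15625
```

By this heuristic, roughly 60 KB of accumulated conversation text would land in the 15–16k token range, which is consistent with the error above and would point at the history theory.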
Is there a way I can easily look at the exact API request and response info in order to debug?
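I don't know whether the plugin exposes request logging itself, but one generic way to see the exact payload is to point it at a small local proxy that logs each request body before forwarding to `api.openai.com`. This assumes the plugin lets you override the API host (check its docs for a base-URL option); the sketch below is minimal stdlib code, not production-grade:

```python
# Minimal logging-proxy sketch: print each chat-completions request and
# response, then forward/relay to the real API. Assumption: the plugin
# can be pointed at http://localhost:8080 instead of https://api.openai.com.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.openai.com"

def summarize_request(body: bytes) -> str:
    """Return a one-line summary of a chat-completions payload."""
    payload = json.loads(body)
    msgs = payload.get("messages", [])
    chars = sum(len(m.get("content", "")) for m in msgs)
    return f"model={payload.get('model')} messages={len(msgs)} chars={chars}"

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        print(">>>", summarize_request(body))
        print(body.decode("utf-8", errors="replace"))  # the exact request
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": self.headers.get("Authorization", ""),
            },
        )
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        print("<<<", data.decode("utf-8", errors="replace"))  # exact response
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

def main():
    # Run on localhost:8080 and point the plugin's API host here.
    HTTPServer(("localhost", 8080), LoggingProxy).serve_forever()
```

With this running, the message count and character total printed per request would immediately show whether the whole panel history (or anything Copilot-related) is riding along in the payload.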