cvlab-columbia / viper

Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"

Problem with maximum context length using text-davinci-003 #8

Closed cathyxl closed 1 year ago

cathyxl commented 1 year ago

Hi, since Codex is no longer available, I've tried using text-davinci-003 instead, but OpenAI always returns the following error:

"This model's maximum context length is 4097 tokens, however, you requested 5270 tokens (4758 in your prompt; 512 for the completion). Please reduce your prompt; or completion length."

How do you deal with the max context length problem?
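For anyone hitting this, the overflow can be reproduced locally with OpenAI's tiktoken tokenizer before calling the API. This is a minimal sketch; the prompt path is a placeholder for wherever you keep the ViperGPT API prompt:

```python
import tiktoken

# Hypothetical path: load the full ViperGPT API prompt from disk.
with open("prompts/api.prompt") as f:
    prompt = f.read()

enc = tiktoken.encoding_for_model("text-davinci-003")  # p50k_base tokenizer
n_prompt = len(enc.encode(prompt))

# The 4097-token window is shared between the prompt and the completion
# (the 512 tokens requested via max_tokens), hence 4758 + 512 > 4097.
print(n_prompt, n_prompt + 512 <= 4097)
```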

surisdi commented 1 year ago

Hi, Codex (code-davinci-002) had a context size of 8192 tokens, so this was not a problem. To avoid the issue with text-davinci-003, you can remove the VideoSegment part of the prompt if you only want to apply the method to images (see the prompt we released for the chat versions for an example, as in the sketch below). Otherwise, if video is important, you can try removing methods from ImagePatch that are not necessary for your use case.
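A minimal sketch of the first suggestion, assuming the released prompt follows the ViperGPT API layout with a top-level `class VideoSegment` block; the file path and token budget are placeholders:

```python
import re
import tiktoken

MAX_CONTEXT = 4097       # text-davinci-003 context window
COMPLETION_TOKENS = 512  # tokens reserved for the completion (max_tokens)

def strip_video_segment(prompt: str) -> str:
    """Drop the VideoSegment class from the API prompt (image-only use).

    Assumes the VideoSegment section starts at a top-level `class VideoSegment`
    line and runs until the next top-level `class` definition or end of file.
    """
    pattern = re.compile(r"^class VideoSegment.*?(?=^class |\Z)", re.S | re.M)
    return pattern.sub("", prompt)

# Hypothetical path to the released prompt file.
with open("prompts/api.prompt") as f:
    prompt = f.read()

trimmed = strip_video_segment(prompt)
enc = tiktoken.encoding_for_model("text-davinci-003")
assert len(enc.encode(trimmed)) + COMPLETION_TOKENS <= MAX_CONTEXT, \
    "Still too long: also remove unused ImagePatch methods"
```

If the trimmed prompt still exceeds the window, the same regex approach can be applied to individual ImagePatch methods you don't need.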