cvlab-columbia / viper

Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"

Code not moving across codex_helper #18

Closed: shivanshpatel35 closed this issue 1 year ago

shivanshpatel35 commented 1 year ago

Hi, when I run main_simple.ipynb in debugger mode, I notice that the code never gets past codex_helper here. It repeatedly runs the same function but never moves on. What could be a possible solution to this?

Thanks

surisdi commented 1 year ago

Hi, Codex was discontinued by OpenAI. You can try using the chat models (gpt-3.5-turbo or gpt-4) instead.
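For context, a minimal sketch of what that swap could look like, assuming the legacy openai Python SDK (pre-1.0) that was current at the time; the function name and parameters are illustrative, not the repository's own code:

```python
# Minimal sketch: replace the discontinued Codex completion call with a chat
# completion call. Assumes the legacy openai SDK (< 1.0) and that
# openai.api_key is already configured.
import openai

def generate_code_with_chat_model(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send the full ViperGPT prompt as a single user message to a chat model."""
    response = openai.ChatCompletion.create(
        model=model,                 # "gpt-3.5-turbo" or "gpt-4"
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
        max_tokens=512,              # same completion budget as in the error below
    )
    return response["choices"][0]["message"]["content"]
```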

shivanshpatel35 commented 1 year ago

Hi, I changed the model here, but it still gets stuck at the same point here. A bit of debugging leads to the following error log:

This model's maximum context length is 4097 tokens. However, you requested 4423 tokens (3911 in the messages, 512 in the completion). Please reduce the length of the messages or completion.

Do you have any suggestions on how to avoid this? My query is of reasonable length, only 12 words.
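The 3,911 tokens in the messages come almost entirely from the fixed API specification that is prepended to every query, not from the 12-word question itself. One hedged way to confirm this is to count the prompt's tokens with tiktoken (the path below is the repository's prompts/api.prompt file; the sample query is just an illustrative 12-word placeholder):

```python
# Sketch: measure how many tokens the fixed API prompt contributes, vs. the query.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

with open("prompts/api.prompt") as f:
    api_spec = f.read()

example_query = "an example twelve word question about the contents of an input image"

print(f"API prompt alone: {len(enc.encode(api_spec))} tokens")
print(f"12-word query:    {len(enc.encode(example_query))} tokens")
```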

surisdi commented 1 year ago

Hi,

The problem is that gpt-3.5-turbo has a context size of 4,097 tokens, which is half the context size that Codex (code-davinci-002) had. The solution for now is to remove the video part of the prompt (the VideoSegment class). Or, if you want to work with videos, remove the functions from ImagePatch that are not required for your application. We implemented the former in the chatapi.prompt file. To use it, change this line (note that this is now the default).
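If you prefer to do the trimming programmatically rather than editing the prompt file, a hypothetical sketch could look like the following; the helper name and the assumption that VideoSegment appears as an unindented top-level class in the prompt file are mine, not the repository's:

```python
# Hypothetical helper (not ViperGPT's own code): drop the VideoSegment class
# block from the API prompt, keeping ImagePatch and everything else.
def strip_video_segment(api_spec: str) -> str:
    kept, skipping = [], False
    for line in api_spec.splitlines(keepends=True):
        if line.startswith("class VideoSegment"):
            skipping = True                      # start of the video section
        elif skipping and line.startswith("class "):
            skipping = False                     # next top-level class ends it
        if not skipping:
            kept.append(line)
    return "".join(kept)

with open("prompts/api.prompt") as f:
    trimmed_prompt = strip_video_segment(f.read())
```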

chatapi.prompt is also required when working with chat versions of GPT regardless of context size, because completion-style prompting does not work as well (they are chat models, not completion models), so they need to be prompted slightly differently. We did not test this prompt/model combination extensively, so you may have to experiment a bit with the prompt.
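As a rough illustration of the difference, a chat model takes a list of role-tagged messages rather than one long completion string. One plausible split (an assumption, not necessarily how chatapi.prompt is wired in the repo) is to send the API spec as the system message and the question as the user message:

```python
# Illustrative only: splitting a completion-style prompt into chat messages,
# assuming the legacy openai SDK. The file path and example query are
# placeholders; the repo's chatapi.prompt may organize this differently.
import openai

def build_chat_messages(api_spec: str, query: str) -> list:
    return [
        {"role": "system", "content": api_spec},   # the (trimmed) API description
        {"role": "user", "content": query},        # the user's question
    ]

with open("prompts/chatapi.prompt") as f:          # adjust path to your checkout
    chat_api_spec = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=build_chat_messages(chat_api_spec, "Is there a red car in the image?"),
    temperature=0.0,
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```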

shivanshpatel35 commented 1 year ago

Removing VideoSegment from prompts/api.prompt solves the issue. Thanks!