JusticeRage / Gepetto

IDA plugin which queries language models to speed up reverse-engineering
GNU General Public License v3.0
2.87k stars · 263 forks

Content length too long #9

Closed · Puyodead1 closed 1 year ago

Puyodead1 commented 1 year ago

Hi, I'm trying this on a function that is a bit large, and unfortunately it returns the error: `davinci-003 could not complete the request: This model's maximum context length is 4097 tokens, however you requested 7196 tokens (5946 in your prompt; 1250 for the completion). Please reduce your prompt; or completion length.`

Is there a way to get around this and still be able to use this plugin on large functions?
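For context, the limit is counted in tokens rather than characters, so it is possible to estimate up front whether a function will fit. A minimal sketch using OpenAI's tiktoken tokenizer; the model name and the 1250-token completion reservation are taken from the error message above, not from the plugin's code:

```python
import tiktoken

MAX_CONTEXT = 4097         # total context window reported by the API error
COMPLETION_RESERVE = 1250  # tokens the request reserves for the completion

def fits_in_context(prompt: str, model: str = "text-davinci-003") -> bool:
    """Return True if the prompt leaves enough room for the completion."""
    encoding = tiktoken.encoding_for_model(model)
    prompt_tokens = len(encoding.encode(prompt))
    return prompt_tokens + COMPLETION_RESERVE <= MAX_CONTEXT
```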

ScSofts commented 1 year ago

I think you may need to pay to solve the problem. Please take a look at https://openai.com/api/pricing/

JusticeRage commented 1 year ago

The plugin is already using the most expensive model (Davinci), and there is a hard limit in the API on the size of queries. First, I will add a sanity check to avoid sending queries that are too big, and display an error message instead. The next step, as a workaround, will be to allow sending only parts of a function for analysis.
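As a sketch of that workaround, one could split the decompiled pseudocode into line-aligned chunks that each fit the prompt budget and query the model once per chunk. This is only an illustration, not how the plugin ended up implementing it; the function name and token budget here are hypothetical:

```python
import tiktoken

def split_into_chunks(pseudocode: str, max_prompt_tokens: int,
                      model: str = "text-davinci-003") -> list[str]:
    """Split decompiled pseudocode into line-aligned chunks that each
    stay under the given prompt-token budget."""
    encoding = tiktoken.encoding_for_model(model)
    chunks, current, used = [], [], 0
    for line in pseudocode.splitlines(keepends=True):
        n = len(encoding.encode(line))
        # Start a new chunk when the next line would overflow the budget.
        if current and used + n > max_prompt_tokens:
            chunks.append("".join(current))
            current, used = [], 0
        current.append(line)
        used += n
    if current:
        chunks.append("".join(current))
    return chunks
```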

There are other models with different limits, but my initial testing shows that the results are very unreliable.

ScSofts commented 1 year ago

> The plugin is already using the most expensive model (Davinci), and there is a hard limit in the API on the size of queries. First, I will add a sanity check to avoid sending queries that are too big, and display an error message instead. The next step, as a workaround, will be to allow sending only parts of a function for analysis.
>
> There are other models with different limits, but my initial testing shows that the results are very unreliable.

Maybe we can do it offline?

ScSofts commented 1 year ago

I have seen several open-source models, like GPT-Neo, which can be used to do the same thing.
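For illustration, a self-hosted model can be queried locally through the Hugging Face transformers library. A minimal sketch, where the GPT-Neo checkpoint and the prompt are examples rather than anything Gepetto supports:

```python
from transformers import pipeline

# Load a published open-source checkpoint locally (downloads weights on first run).
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = (
    "Explain what this C function does:\n\n"
    "int f(int a, int b) { return a ^ b; }\n\n"
    "Explanation:"
)
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```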

JusticeRage commented 1 year ago

As far as I can tell, other OpenAI models unfortunately do not work as well when it comes to explaining code. I'll close this issue until a better model becomes available; for now, I don't think it's worth adding options that are (again, based on my testing) strictly worse.

The initial request, handling the content-length issue, was resolved by commit c16c482 as far as I can tell.