hamperia4 opened 8 months ago
So I am looking into this issue. The thing is, OpenAI has a token limit, and that is common to all models. The only way to mitigate this is to make it streamable, and I am working on that change. This is only an issue with Nmap and its related output. I will be working on this, and maybe in the next update it will be corrected.
Thanks for letting me know 👍
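For context, the "make it streamable" fix described above amounts to chunking the Nmap output so each request stays under the model's context window. A minimal sketch of that idea (the function names and chunk size are assumptions for illustration, not the project's actual code):

```python
# Hypothetical sketch: split a large Nmap report into chunks that fit the
# model's context window and analyze each chunk separately.
# CHUNK_CHARS is a rough assumption (~4 characters per token on average).
CHUNK_CHARS = 12000

def chunk_output(nmap_output: str, size: int = CHUNK_CHARS) -> list[str]:
    """Split raw Nmap output into fixed-size pieces."""
    return [nmap_output[i:i + size] for i in range(0, len(nmap_output), size)]

def analyze_in_chunks(nmap_output: str, analyze) -> str:
    """Run the analysis prompt per chunk and join the partial results."""
    partials = [analyze(chunk) for chunk in chunk_output(nmap_output)]
    return "\n".join(partials)
```

The partial results could then be merged in a final summarization pass if a single combined answer is needed.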
Thank you for the feedback. Yes, I agree; due to the size of Nmap responses this can be expected with OpenAI. Just out of curiosity: even if you use bigger models like gpt-4-0125-preview and expand the max_token value from 2500 to 4096, will this still remain?
I guess yes. It boils down to how much the AI can process and what the optimal parameters for accuracy are. I am pretty sure that's what is going on here. Maybe OpenAI limits the input tokens to maintain consistently accurate output 🤷‍♂️
On my error I can see that OpenAI actually states the limit: the maximum context length is 16385 tokens, but I requested 17216 tokens (14716 in the prompt, plus 2500, the max_token value, for the completion). I believe that with a GPT-4 model this will not be an issue. Let me test this and get back to you.
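The arithmetic in the error is prompt tokens plus max_token: 14716 + 2500 = 17216, which exceeds the 16385-token window by 831 tokens. One way to check and enforce this up front is with tiktoken; the limits below come from the error message, while the trimming strategy is just one possible sketch, not the project's code:

```python
import tiktoken

CONTEXT_LIMIT = 16385   # gpt-3.5-turbo context window (from the error message)
MAX_COMPLETION = 2500   # the max_token value used in the request

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by gpt-3.5/gpt-4

def fit_prompt(prompt: str) -> str:
    """Trim the prompt so prompt + completion stays inside the context window."""
    budget = CONTEXT_LIMIT - MAX_COMPLETION
    tokens = enc.encode(prompt)
    if len(tokens) <= budget:
        return prompt
    return enc.decode(tokens[:budget])
```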
So the thing is, the output and input limits are different. GPT-4 has more input and output capacity, but it can't be leveraged properly with direct prompts like the ones I am using, so I need to modify that.
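Those per-model budgets could be made explicit. The context windows and output caps below are OpenAI's published figures for these models, while the dict layout and helper are assumptions for illustration:

```python
# Published limits per model: model -> (context window, max completion tokens).
MODEL_LIMITS = {
    "gpt-3.5-turbo-0125": (16385, 4096),
    "gpt-4-0125-preview": (128000, 4096),
}

def completion_budget(model: str, prompt_tokens: int) -> int:
    """How many completion tokens can still be requested with this prompt."""
    context, max_out = MODEL_LIMITS[model]
    return max(0, min(max_out, context - prompt_tokens))
```

For the 14716-token Nmap prompt above, gpt-3.5-turbo-0125 leaves room for only 1669 completion tokens, so a fixed max_token of 2500 overruns the window; gpt-4-0125-preview's 128k window would not.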
Hello, as tested on multiple models this is still the same, so you are correct. Is there any update?
I was kinda busy with uni exams and stuff, so I have not been able to work on this 😅. I will work on it when I get the time.
Hello, while playing around with the project and running Nmap with OpenAI using profile 5 or 12, I get the error:
"message: "This model's maximum context length is 16385 tokens, however you requested 17216 tokens (14716 in your prompt; 2500 for the completion). Please reduce your prompt; or completion length.","
I'm getting this message after changing the model from gpt-3.5-turbo-0613 to gpt-3.5-turbo-0125. What would be the best approach here?
Thank you for your time.
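As an interim workaround for the error quoted above, the request can be kept inside the window by trimming the prompt (as sketched earlier), lowering max_token, or moving to a larger-context model. A minimal sketch with the OpenAI Python client, where prompt stands in for the real Nmap analysis prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "..."  # the Nmap analysis prompt, ideally pre-trimmed as above

response = client.chat.completions.create(
    model="gpt-4-0125-preview",   # 128k context avoids the 16385-token cap
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,              # smaller completion budget than 2500
)
print(response.choices[0].message.content)
```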