passionweb-manuel-schnabel / ai-seo-helper

Generates SEO metadata from page content using AI. Currently, several metadata fields for pages and for EXT:news articles can be generated via an additional button next to the corresponding input fields.

Error 400 - Too many tokens used for a call error? #2

Closed: janslu closed this issue 1 year ago

janslu commented 1 year ago

Hi, I'm testing this extension (could be really useful for one of my sites) and I'm getting the following error:

```
Client error: `POST https://api.openai.com/v1/completions` resulted in a `400 Bad Request` response:
message: This model's maximum context length is 4097 tokens, however you requested 7814 tokens (67 (truncated...)
```

I'm using the default settings (with my trial API key). This is for a text page with 4 paragraphs of text. I'm on 11.5.24 with Toast Premium installed, if that matters.

Any idea what this might be about?

passionweb-manuel-schnabel commented 1 year ago

Hi Jan,

thanks for your feedback and reporting the issue.

Unfortunately, the OpenAI API (currently) limits the maximum number of tokens per request depending on the model used (e.g. "text-davinci-003" is limited to 4096 tokens). You can find a detailed overview of the models and their maximum token counts here:

https://platform.openai.com/docs/models/gpt-3 (The 4000 listed for "text-davinci-003" still seems to be incorrect in the documentation, since the error message and the OpenAI playground report a different limit.)

Based on the current status of the extension, it is unfortunately not yet possible to analyze larger texts.

One possible option would be to count the characters/tokens of the text in advance and, if the limit is exceeded, split the request into several API requests. The individual text blocks could then be summarized recursively.
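To make the idea concrete, here is a minimal sketch (in Python, not the extension's actual PHP code) of what "count tokens, split, summarize recursively" could look like. The 4-characters-per-token ratio and the 500-token completion reserve are assumptions for illustration; a real implementation would use a proper tokenizer such as tiktoken and call the OpenAI API in `summarize`.

```python
# Sketch only: estimate tokens, split text at paragraph boundaries so each
# chunk fits the model's context window, and summarize chunks recursively.

MAX_TOKENS = 4097          # limit reported in the error for text-davinci-003
COMPLETION_RESERVE = 500   # assumed headroom left for the generated answer


def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)


def split_into_chunks(text: str, budget: int = MAX_TOKENS - COMPLETION_RESERVE):
    """Split text at paragraph boundaries so each chunk stays under budget."""
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        candidate = (current + "\n\n" + paragraph).strip()
        if estimate_tokens(candidate) > budget and current:
            chunks.append(current)      # flush the chunk that still fits
            current = paragraph
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks


def summarize_recursively(text, summarize, budget=MAX_TOKENS - COMPLETION_RESERVE):
    """Summarize each chunk, then summarize the joined partial summaries
    until the whole thing fits into a single request."""
    if estimate_tokens(text) <= budget:
        return summarize(text)
    partials = [summarize(chunk) for chunk in split_into_chunks(text, budget)]
    return summarize_recursively("\n\n".join(partials), summarize, budget)
```

The trade-off of the recursive pass is extra API calls (and cost) for long pages, but it avoids silently dropping content.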

In any case, I will try to provide a solution for this in one of the upcoming versions.

janslu commented 1 year ago

@passionweb-manuel-schnabel - thanks for your reply! I wasn't sure what "token" means in this context; now it's clear. Thank you. I think the worst-case scenario would be sending a trimmed version of the content, i.e. hard-limiting the length that is sent out.

passionweb-manuel-schnabel commented 1 year ago

A hard limit and simply shortening the text would really be the worst case.

I was thinking more of a variant in which you first determine the length and then, if necessary, divide the entire content at suitable points: for example after a complete block of text, or by using the IDs of the content elements to find a suitable split point.
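The content-element variant could be sketched like this (again an illustration in Python, not the extension's code): instead of cutting raw text, whole content elements, identified here by a hypothetical `uid`, are packed into chunks so that no element is split mid-sentence. This assumes each element fits the token budget on its own.

```python
# Sketch only: group whole content elements into chunks under a token budget,
# splitting only at element boundaries.

def chunk_by_elements(elements, estimate_tokens, budget):
    """elements: list of (uid, text) pairs in page order.
    Returns a list of chunks, each a list of (uid, text) pairs."""
    chunks, current, current_tokens = [], [], 0
    for uid, text in elements:
        tokens = estimate_tokens(text)
        if current and current_tokens + tokens > budget:
            chunks.append(current)          # this element starts a new chunk
            current, current_tokens = [], 0
        current.append((uid, text))
        current_tokens += tokens
    if current:
        chunks.append(current)
    return chunks
```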

I think this simply has to be tried. And if the results are complete garbage, the approach will be removed again and we'll have to find another solution ;-)

Of course, the easiest way would be if OpenAI simply increases the limit. But I think there are reasons why there is one :-D

passionweb-manuel-schnabel commented 1 year ago

Hi Jan,

I did some research on the problem and pushed my approach to the master branch.

Here's what I did/changed:

I hope that I have summarized all the essential information for you in an understandable way and that you can successfully analyze your site with these adjustments. If you have any questions, just let me know.

I look forward to your feedback once you have tested it.