Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co
MIT License

Auto-switch to gpt-35-turbo, gpt-4 and gpt-4-32k when number of tokens exceeded by query #3367

Closed — NKT00 closed this issue 1 year ago

NKT00 commented 1 year ago

Duplicates

Summary 💡

There are a few bug reports close to this one. Rather than surfacing the error SYSTEM: Command get_text_summary returned: Error: This model's maximum context length is 4097 tokens. However, your messages resulted in 5113 tokens. Please reduce the length of the messages. — would it not make sense to simply swap the model, just for that query, when the length of the query is below the limit of another model?

gpt-35-turbo is limited to 4096 tokens, whereas the token limits for gpt-4 and gpt-4-32k are 8192 and 32768 respectively. This should be straightforward to implement.
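The fallback the proposal describes could be sketched roughly as follows. This is illustrative only, not AutoGPT's actual code: the model names and limits come from the comment above, while count_tokens is a crude stand-in (a real implementation would use a proper tokenizer such as tiktoken).

```python
# Fallback chain: pick the smallest model whose context window fits the
# request. Limits are those quoted in the thread.
MODEL_LIMITS = [
    ("gpt-3.5-turbo", 4096),
    ("gpt-4", 8192),
    ("gpt-4-32k", 32768),
]

def count_tokens(text: str) -> int:
    # Rough stand-in: ~1 token per whitespace-separated word.
    # Replace with a real tokenizer in practice.
    return len(text.split())

def pick_model(prompt: str, reply_budget: int = 1000) -> str:
    # Reserve reply_budget tokens for the model's answer.
    needed = count_tokens(prompt) + reply_budget
    for model, limit in MODEL_LIMITS:
        if needed <= limit:
            return model
    raise ValueError(f"Prompt needs {needed} tokens; no model is large enough")
```

With this shape, the error above would only occur once even gpt-4-32k is too small, instead of on every oversized query.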

Examples 🌈

No response

Motivation 🔦

Everything that pulls a website page fails, as the webpages are too big, generally. However, some are only slightly too big, and could be run through a different model to downsize them first.

dimitar-d-d commented 1 year ago

I strongly support this proposal. It should be easy to implement and would definitely help make this tool actually useful.

Boostrix commented 1 year ago

you could probably use some sort of preprocessor/preparation stage prior to passing such contexts to the LLM

dimitar-d-d commented 1 year ago

you could probably use some sort of preprocessor/preparation stage prior to passing such contexts to the LLM

Yep. I could do that, and I do. I end up splitting my text assignments into three separate runs of AutoGPT just to avoid the error... This, however, is time-consuming and impractical.

The ability of the tool to dynamically call a larger model when applicable, combined with better chunking, would definitely reduce the number of fatal errors.
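The "better chunking" half of that idea could look something like the sketch below — split a long text into pieces that each fit a token budget, so each piece can be processed separately. This is an illustrative assumption, not AutoGPT's preprocessing code, and the word-based token count is a deliberate simplification.

```python
def chunk_text(text: str, max_tokens: int) -> list[str]:
    # Greedy chunking: accumulate words until the budget is reached,
    # then start a new chunk. A real tokenizer should replace split().
    words = text.split()
    chunks, current = [], []
    for word in words:
        current.append(word)
        if len(current) >= max_tokens:
            chunks.append(" ".join(current))
            current = []
    if current:
        chunks.append(" ".join(current))
    return chunks
```

A page that is "only slightly too big" then becomes two or three chunks instead of one hard failure.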

Rykimaruh commented 1 year ago

Has there been any workaround for this? I thought Auto-GPT used gpt-4, which has a higher token limit than 3.5, but I'm still hitting the 4097 max token limit.

Boostrix commented 1 year ago

it depends on the level of OpenAI API access you've got

anonhostpi commented 1 year ago

I'd like to say that there should be a switching mechanism that switches between all of the supported APIs, not just OpenAI's models/APIs.

@p-i- perhaps if and when the repository gets around to implementing the APIs as plugins, maybe add a plugin object that reports the rate limit associated with that API, so that AutoGPT can completely switch plugins, not just models.
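The plugin idea above could be sketched as a small interface. All the names here (ProviderPlugin, max_context_tokens, first_capable) are hypothetical, invented for illustration — the thread only proposes that each provider plugin report its own limits so the agent can fail over across providers, not just models.

```python
from abc import ABC, abstractmethod

class ProviderPlugin(ABC):
    """Hypothetical provider plugin that advertises its own limits."""

    @property
    @abstractmethod
    def max_context_tokens(self) -> int:
        """Largest context window this provider supports."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send the prompt to the provider and return its reply."""

def first_capable(plugins: list, needed_tokens: int):
    # Pick the first provider whose context window fits the request;
    # None means every registered provider is too small.
    for plugin in plugins:
        if plugin.max_context_tokens >= needed_tokens:
            return plugin
    return None
```

Rate limits could be reported the same way, letting the agent rotate to another provider when one is throttled.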

Boostrix commented 1 year ago

there should be a switching mechanism that switches between all of the supported APIs, not just OpenAI's models/APIs.

That seems to be work in progress: #2158

maybe add a plugin object that reports the rate limit associated with that API, so that AutoGPT can completely switch plugins, not just models.

:+1: the basic idea is this #3466

anonhostpi commented 1 year ago

love ya boostrix, which one are you on the Discord Server?

Boostrix commented 1 year ago

I'd like to say that there should be a switching mechanism that switches between all of the supported APIs, not just OpenAI's models/APIs.

that's a form of feature scaling, and #3466 - #528

but agreed, if one model fails, there should be an option to try another one - even if that's not the preferred one

kinance commented 1 year ago

To fix this issue, the batch summarization approach introduced in PR #4652 could also be applied to the summarize_text function in text.py.
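Batch summarization here presumably means a map-reduce shape: summarize each chunk independently, then condense the partial summaries in one final pass. The sketch below illustrates that shape only — summarize_chunk is a placeholder for the actual LLM call, and none of this is the code from PR #4652.

```python
def summarize_chunk(chunk: str) -> str:
    # Placeholder: a real implementation would call the LLM here.
    # Truncation stands in for summarization to keep the sketch runnable.
    return chunk[:50]

def batch_summarize(chunks: list[str]) -> str:
    # "Map": summarize each chunk on its own, so no single call
    # exceeds the model's context window.
    partial = [summarize_chunk(c) for c in chunks]
    # "Reduce": one final pass condenses the partial summaries.
    return summarize_chunk(" ".join(partial))
```

Because each call sees only one chunk (or the short partial summaries), the 4097-token ceiling is never hit regardless of the input's total size.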

unitythemaker commented 1 year ago

gpt-3.5-turbo-16k is here.

github-actions[bot] commented 1 year ago

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions[bot] commented 1 year ago

This issue was closed automatically because it has been stale for 10 days with no activity.