Pythagora-io / gpt-pilot

The first real AI developer

[Bug]: Error calling LLM API: The request exceeded the maximum token limit (request size: 574796) tokens. #668

Open 93michaelnash opened 4 months ago

93michaelnash commented 4 months ago

Version

VisualStudio Code extension

Operating System

Windows 10

What happened?

About 45 steps into a build (I've tried starting afresh and this has happened a couple of times now), I receive the error: Error calling LLM API: The request exceeded the maximum token limit (request size: 574796) tokens. I don't know why the request is so big for one, but there is no fail-safe and it simply crashes. I've tried loading it back to a previous step, but as soon as I get to this point again, it fails.

I'm trying to create an Ionic app with Firebase integration; it has literally only set up the app and is handling some configuration before it crashes.

The task it claims to be attempting before the error appears: Implementing task #2: Implement Firebase Authentication service within the Ionic app utilizing the Firebase SDK. This task includes setting up the provider configurations for Facebook, Google, and Instagram in the Firebase console. Developers should implement a Login Page using Ionic components that offers users the option to sign in using these SSO providers. This also includes creating an authentication guard to protect routes that require a user to be logged in.

It may be that the task itself is too large and needs breaking down further? Unsure.

Please advise if there is a workaround / I'm doing something wrong / this is something to fix.

Thanks, Michael

93michaelnash commented 4 months ago

Any update on this one? Thanks

techjeylabs commented 2 months ago

hey there, I highly suggest you have a look at our wiki section. I can only guess, but given the problem you describe it sounds like the context is simply too big. You should have a look at how to exclude file paths.

You can do this by editing your .env file at /gpt-pilot/pilot/.env:

# Set extra buffer to wait on top of detected retry time when rate limit is hit. Defaults to 6
# RATE_LIMIT_EXTRA_BUFFER=
IGNORE_PATHS=./dir1,./dir2
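
For an Ionic/Firebase project like the one described above, a plausible (hypothetical) configuration would exclude the large generated directories that typically bloat the context. The directory names below are common Ionic build/dependency folders, not something gpt-pilot adds for you:

```shell
# Example only: exclude heavy generated folders common in Ionic projects
# (node_modules = npm dependencies, www = Ionic build output, .git = version control)
IGNORE_PATHS=./node_modules,./www,./.git
```

Dependency folders like node_modules can easily contain hundreds of thousands of tokens on their own, which would account for a request size like 574796.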