Pythagora-io / gpt-pilot

The first real AI developer

API responded with status code: 429. Rate limit reached for 10KTPM-200RPM in organization org-WyXXXXXXXXX #17

Closed akamalov closed 11 months ago

akamalov commented 1 year ago

Getting immediate:

API responded with status code: 429. Response text: {
    "error": {
        "message": "Rate limit reached for 10KTPM-200RPM in organization org-WyXXXXXXXXX on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.",
        "type": "tokens",
        "param": null,
        "code": "rate_limit_exceeded"
    }
}

This is my first attempt to access the OpenAI API today, and I am already getting this error. I am running other applications that generate Python code and I am not getting this error there.

zenchantlive commented 1 year ago

i am having the same issue

hafizSiddiq7675 commented 1 year ago

Same issue

zvone187 commented 1 year ago

This happens when you have a small limit on the number of tokens per minute. OpenAI puts 10k tokens per minute by default which is too little for GPT Pilot, but you can request a limit increase from OpenAI.

CyKiller commented 1 year ago

We should add a step - to improve the pilot and allow for user feedback during confirmations and error handling, we can modify the create_gpt_chat_completion function.

First, instead of just asking the user to press ENTER to confirm, we can use the questionary library to create a more interactive prompt. Second, when an error occurs, we can ask the user for advice or feedback before deciding whether to retry the request. When an exception is raised, the code would prompt the user with questionary.text and, for now, print the input back out; that print statement can be replaced with any action we want to perform with the user's feedback. This also halts the process, which gives us a control point against runaway loops or excessive token usage before the next request is attempted.

We can then use the user's feedback however the task at hand requires. For example, we could log it, use it to alter the program's behavior, or even send it back to the server for further analysis. These changes should make the program more interactive and responsive to user input, and could help avoid issues like infinite loops or excessive token usage.
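The flow described above can be sketched as follows. This is a minimal illustration, not gpt-pilot's actual code: `request_fn` and `ask_fn` are hypothetical stand-ins (in the real project the request lives in `llm_connection.py`, and `ask_fn` would be `questionary.text(...).ask()` rather than plain `input`).

```python
def create_gpt_chat_completion(request_fn, max_attempts=3, ask_fn=input):
    """Run the API request; on failure, pause and ask the user for
    feedback before deciding whether to retry.

    request_fn -- callable performing the API call (hypothetical stand-in)
    ask_fn     -- callable prompting the user; injectable for testing
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception as err:
            feedback = ask_fn(f"Request failed ({err}). Advice, or ENTER to retry: ")
            # For now just echo the feedback; this is the hook where we
            # could log it, change behavior, or send it to the server.
            print(f"User feedback: {feedback}")
            if attempt == max_attempts:
                raise
```

Because the prompt function is injectable, the retry-with-feedback loop can be tested without a terminal.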

Zate commented 1 year ago

This happens when you have a small limit on the number of tokens per minute. OpenAI puts 10k tokens per minute by default which is too little for GPT Pilot, but you can request a limit increase from OpenAI.

No, they do not raise the limit on GPT-4, according to their own docs and forms.

I'd love to see a combination of using GPT-3.5 Turbo for places where it doesn't matter, with GPT-4 used just for the important pieces.

It would be nice to have it implement some kind of automated handling of the rate limit, such as https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors or similar.

zenchantlive commented 1 year ago

I completely agree with you! There's no ability to up our limits, unfortunately.


CyKiller commented 1 year ago

It would be nice to have it implement some kind of automated handling of the rate limit, such as https://help.openai.com/en/articles/5955604-how-can-i-solve-429-too-many-requests-errors or similar.

We can test this, I guess. We would likely need to update the llm_connection.py file to include an exponential backoff mechanism similar to the one described in the OpenAI article: wrap the existing API request code in def stream_gpt_completion in a while loop, and if a "429: Too Many Requests" error is encountered, wait for a set sleep time and then retry the request. Any advice here would be helpful, as I haven't thought it through thoroughly, but the sleep time could double with each retry, up to a maximum number of retries, so the process feels uninterrupted on our end.
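A minimal sketch of that backoff loop, assuming the request code is factored out as a callable (`request_fn` is a hypothetical stand-in for the body of stream_gpt_completion; real code would inspect the HTTP status rather than matching the exception text):

```python
import time

def request_with_backoff(request_fn, max_retries=5, base_delay=1.0,
                         sleep=time.sleep):
    """Retry request_fn on 429 errors with exponential backoff.

    request_fn -- callable performing the API call (hypothetical stand-in)
    sleep      -- injectable sleep function, so tests don't actually wait
    """
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception as err:
            # Only retry rate-limit errors; give up on the last attempt.
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            sleep(delay)
            delay *= 2  # double the wait each retry
```

Injecting `sleep` keeps the loop testable: a test can record the delays instead of waiting through them.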

The article also notes: "we will not increase limits on gpt-4, text-davinci-003, gpt-3.5-turbo-16k, or fine-tuned models at this time."

nalbion commented 1 year ago

This is fixed now at https://github.com/Pythagora-io/gpt-pilot/blob/main/pilot/utils/llm_connection.py#L152

I do like @CyKiller's suggestion of exponential back-off. Currently it follows the instructions in the response, which is always "Please try again in 6ms".
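Rather than honouring the literal "6ms" hint (which is far too short to be useful), a helper could parse the suggested wait out of the error message and fall back to a sane default. A sketch, assuming the message format shown in the issue above (the function name and fallback value are assumptions, not gpt-pilot's actual code):

```python
import re

def retry_after_seconds(message, default=1.0):
    """Extract the wait hinted at by an OpenAI 429 message, e.g.
    'Please try again in 6ms.' or 'Please try again in 20s.',
    returning seconds. Falls back to `default` when no hint is found.
    """
    match = re.search(r"try again in (\d+(?:\.\d+)?)(ms|s)", message)
    if not match:
        return default
    value, unit = float(match.group(1)), match.group(2)
    return value / 1000.0 if unit == "ms" else value
```

The parsed value could then feed the backoff loop as its initial delay instead of a fixed base.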

@Zate also suggests using "GPT 3.5 Turbo for places where it doesnt matter" which is also a good idea.