eli64s / readme-ai

README file generator, powered by AI.
https://eli64s.github.io/readme-ai/
MIT License

ERROR HTTPStatus Exception: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions' #61

Open ataur39n-sharif opened 1 year ago

ataur39n-sharif commented 1 year ago

I am using this with Docker. Here I'm providing some error messages.

My command is:

docker run -it \
  -e OPENAI_API_KEY=API_KEY \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest \
  readmeai -o readme-ai.md -r https://github.com/ataur39n-sharif/book-catelog-backend

The error:

ERROR HTTPStatus Exception: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions' For more information check: https://httpstatuses.com/429

(Screenshots attached: doc-command, doc-0, doc-1, doc-2.)

eli64s commented 1 year ago

Hi @ataur39n-sharif, I just pulled the latest image and ran your repo. Can you try once more with the latest and let me know if you still experience this?

Thanks!

Cro22 commented 1 year ago

Same here! I tried with the latest image. (Screenshots attached.)

eli64s commented 1 year ago

@Cro22 @ataur39n-sharif Are you using a free OpenAI account or a paid one?

Cro22 commented 1 year ago

@eli64s I use a paid OpenAI account.

ataur39n-sharif commented 1 year ago

@eli64s I am using a free OpenAI account.

jatolentino commented 1 year ago

I'm also getting the same 429 error on my README, using https://readmeai.streamlit.app/

Aviksaikat commented 1 year ago

Why not add rate limiting?

eli64s commented 1 year ago

@Aviksaikat There is a default rate limit setting in the config file.

This seems like a common issue for unpaid accounts, but still happens for paid accounts occasionally. I may need to work on a more robust API implementation to solve this problem for everyone.
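A more robust implementation along those lines typically retries 429 responses with exponential backoff plus jitter, rather than failing on the first rejection. The sketch below illustrates the idea only; the function names and parameters are invented here and make no claim about readme-ai's actual internals.

```python
import random
import time

def backoff_delays(max_retries=5, base=1.0, cap=60.0, seed=None):
    """Delay schedule for retries after a 429: exponential growth plus jitter."""
    rng = random.Random(seed)
    return [min(cap, base * (2 ** attempt)) + rng.random()
            for attempt in range(max_retries)]

def call_with_retries(make_request, max_retries=5):
    """Retry `make_request` (which returns an HTTP status code) while it yields 429."""
    for delay in backoff_delays(max_retries):
        status = make_request()
        if status != 429:
            return status
        time.sleep(delay)  # back off before the next attempt
    raise RuntimeError("still rate limited after retries")
```

The jitter (the random fraction of a second added to each delay) helps avoid many clients retrying in lockstep after a shared rate-limit window resets.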

jatolentino commented 1 year ago

@eli64s should we increase or decrease the rate_limit variable?

Aviksaikat commented 1 year ago

it should be decreased.

Aviksaikat commented 1 year ago

How can we update the config file?

Aviksaikat commented 1 year ago

That rate_limit field is totally useless.
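For intuition on what lowering such a setting does: a client-side limiter of this kind usually tracks a sliding window of recent calls and blocks once the window is full, so a lower limit means fewer requests per window. A hypothetical sketch (class and method names invented here; this is not readme-ai's actual code):

```python
import time

class SlidingWindowLimiter:
    """Allow at most `max_calls` requests per `period` seconds (illustrative sketch)."""

    def __init__(self, max_calls, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self._stamps = []  # timestamps of recent calls

    def acquire(self, now=None):
        """Record a call if allowed and return 0.0, else return seconds to wait."""
        now = time.monotonic() if now is None else now
        # Forget calls that have aged out of the window.
        self._stamps = [t for t in self._stamps if now - t < self.period]
        if len(self._stamps) < self.max_calls:
            self._stamps.append(now)
            return 0.0
        # Wait until the oldest call falls out of the window.
        return self._stamps[0] + self.period - now
```

In this model, decreasing `max_calls` (the analogue of a rate_limit setting) makes the client more conservative and less likely to trip a server-side 429.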

daan-ef2 commented 1 year ago

Got the same error while using a paid API key. Is there any workaround?

abhi245y commented 11 months ago

I encountered the same issue and resolved it by switching the model to gpt-4-1106-preview. After forking the repository and reviewing the code, it appears that the issue stems from a limitation with the default gpt-4 model. The README also indicates that it uses gpt-4-1106-preview. I've implemented these changes in my local files and added a troubleshooting section to the README.

As a temporary fix, you can use the following command:

readmeai --output readme-ai.md --model gpt-4-1106-preview --repository https://github.com/eli64s/readme-ai

However, this is my first time contributing to a project, and I'm not entirely sure about the proper procedures for contributing.
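For anyone running through Docker like the original poster, the same workaround should translate to something like the following. This simply combines the docker command from earlier in this thread with the --model flag; it is an untested assumption, not a verified invocation.

```shell
# Assumed Docker variant of the workaround: same image and mounts as the
# earlier command in this thread, with the model overridden.
docker run -it \
  -e OPENAI_API_KEY=API_KEY \
  -v "$(pwd)":/app zeroxeli/readme-ai:latest \
  readmeai --output readme-ai.md \
           --model gpt-4-1106-preview \
           --repository https://github.com/eli64s/readme-ai
```

Replace API_KEY with your actual key, and keep an eye on usage given the cost warning below this comment applies to this model as well.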

eli64s commented 11 months ago

@abhi245y that is correct, using the model engine gpt-4-1106-preview would be a temporary workaround. However, take note of the following before trying this.

[!WARNING]

During brief testing of the gpt-4-1106-preview model I've noticed higher API costs. If trying this workaround, use the OpenAI API Dashboard to continuously track your API usage and cost.

Thank you, Eli

abhi245y commented 11 months ago

Yeah, you're right. When I gave the script a few runs during testing, my usage shot up from $0.21 to $1.40 real quick. But if you're just using it once, it's no big deal.

I also noticed that you switched the model to gpt-4-1106-preview and then reverted it back to gpt-4. At first, I didn't understand why, but now it makes sense.

alexiuscrow commented 10 months ago

Had issues with --model gpt-4-1106-preview and --model gpt-3.5-turbo, but it works well with --model gpt-4.