At Tier 1, the OpenAI rate limits are easy to hit with goose, and when that happens goose crashes instead of backing off. Notably, the error returned from OpenAI includes the amount of time to wait before retrying.
Traceback (most recent call last):
File "/.../.local/pipx/venvs/goose-ai/lib/python3.13/site-packages/exchange/providers/utils.py", line 30, in raise_for_status
response.raise_for_status()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/..../pipx/venvs/goose-ai/lib/python3.13/site-packages/httpx/_models.py", line 763, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
Error from openai:
httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.openai.com/v1/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
{
  "error": {
    "message": "Rate limit reached for gpt-4o in organization org-rwoOAee8QQEqEQbe0mBRJ7sM on tokens per min (TPM): Limit 30000, Used 28578, Requested 4439. Please try again in 6.034s. Visit https://platform.openai.com/account/rate-limits to learn more.",
    "type": "tokens",
    "param": null,
    "code": "rate_limit_exceeded"
  }
}
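Since the error message itself carries the suggested wait ("Please try again in 6.034s"), a handler could parse that value out and sleep before retrying rather than crashing. Below is a minimal sketch of that parsing step; the function name and the regex are mine, not goose's, and assume the message format shown in the JSON above.

```python
import re
from typing import Optional

def parse_retry_seconds(message: str) -> Optional[float]:
    """Pull the suggested wait time (in seconds) out of an OpenAI
    rate-limit error message like 'Please try again in 6.034s.'.
    Returns None if no such hint is present."""
    match = re.search(r"try again in ([\d.]+)s", message)
    return float(match.group(1)) if match else None

# The message from the error body above:
msg = ("Rate limit reached for gpt-4o ... on tokens per min (TPM): "
       "Limit 30000, Used 28578, Requested 4439. "
       "Please try again in 6.034s.")
print(parse_retry_seconds(msg))  # 6.034
```

A caller could then `time.sleep()` for that duration (plus a small buffer) and re-issue the request instead of letting the `HTTPStatusError` propagate.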