Closed by ezzcodeezzlife 1 year ago
Just tested with another smaller repo, still no luck 😢
Hi
Can you check your rate limits under https://platform.openai.com/account/rate-limits and check the values for gpt-3.5-turbo and gpt-4?
We'll add handling for OpenAI rate limits soon, and probably fall back to gpt-3.5-turbo in case of a small rate limit or unavailability of gpt-4.
@okotek
Could be that 40K is too small when the diff is moderate. I merged the PR; can you try again and see if the retry policy helps?
Unfortunately, still the same issue. What TPM do you have, and how did you get it? Any more ideas? @okotek
Did you also check your token usage at https://platform.openai.com/account/usage? I'm curious if an unexpectedly high number of tokens or a burst of requests was used.
No real spike visible in the usage dashboard, but thanks for the info! @KalleV
Nice, that's good to know. What if you click through the language model usage metrics? On my page, I saw a few show up with quite a few tokens like this one:
gpt-4-0613, 1 request
7,052 prompt + 218 completion = 7,270 tokens
Fallback models implementation: https://github.com/Codium-ai/pr-agent/pull/117
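For context, a minimal sketch of what retrying across a list of fallback models can look like (the helper name, model list, and predict callable here are illustrative, not the actual code from that PR):

import logging

async def retry_with_fallbacks(predict, models=("gpt-4", "gpt-3.5-turbo")):
    # Try each model in order; move on to the next one if the call fails
    # (e.g. on an OpenAI rate limit or model-availability error).
    last_exc = None
    for model in models:
        try:
            return await predict(model)
        except Exception as exc:
            logging.warning("Model %s failed (%s); trying the next fallback", model, exc)
            last_exc = exc
    raise last_exc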
Got the same error. Seems the error is from GitHub Actions and not the OpenAI API, but I could be wrong.
Traceback (most recent call last):
File "/app/pr_agent/servers/github_action_runner.py", line 57, in <module>
asyncio.run(run_action())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/app/pr_agent/servers/github_action_runner.py", line 53, in run_action
await PRAgent().handle_request(pr_url, body)
File "/app/pr_agent/agent/pr_agent.py", line 25, in handle_request
await PRDescription(pr_url).describe()
File "/app/pr_agent/tools/pr_description.py", line 40, in describe
await retry_with_fallback_models(self._prepare_prediction)
File "/app/pr_agent/algo/pr_processing.py", line [20](https://github.com/glip-gg/btx-game/actions/runs/5642014830/job/15281052282#step:3:21)8, in retry_with_fallback_models
return await f(model)
File "/app/pr_agent/tools/pr_description.py", line 55, in _prepare_prediction
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
File "/app/pr_agent/algo/pr_processing.py", line 43, in get_pr_diff
diff_files = list(git_provider.get_diff_files())
File "/app/pr_agent/git_providers/github_provider.py", line 84, in get_diff_files
for file in files:
File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 69, in __iter__
newElements = self._grow()
File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line 80, in _grow
newElements = self._fetchNextPage()
File "/usr/local/lib/python3.10/site-packages/github/PaginatedList.py", line [21](https://github.com/glip-gg/btx-game/actions/runs/5642014830/job/15281052282#step:3:22)3, in _fetchNextPage
headers, data = self.__requester.requestJsonAndCheck(
File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 442, in requestJsonAndCheck
return self.__check(
File "/usr/local/lib/python3.10/site-packages/github/Requester.py", line 487, in __check
raise self.__createException(status, responseHeaders, data)
github.GithubException.RateLimitExceededException: 403 {"message": "API rate limit exceeded for installation ID 28441098.", "documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}
Some commands started working for me, but commenting commands still lead to a 403.
@okotek @KalleV any more ideas around this? 🤔 thank you
Hello,
It looks like it is now not OpenAI related, but a GitHub token limitation:
When using GITHUB_TOKEN, the rate limit is 1,000 requests per hour per repository.
If you exceed the rate limit, the response will have a 403 status and the x-ratelimit-remaining header will be 0:
Please see https://docs.github.com/en/rest/overview/resources-in-the-rest-api?apiVersion=2022-11-28#rate-limits-for-requests-from-github-actions
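(A quick way to confirm whether the quota is exhausted is to query the REST API's /rate_limit endpoint with the same token; a rough sketch, assuming the token is exposed as the GITHUB_TOKEN environment variable:)

import os
import requests

# Ask GitHub how much of the core REST quota is left for this token.
resp = requests.get(
    "https://api.github.com/rate_limit",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
)
core = resp.json()["resources"]["core"]
print(f"{core['remaining']} of {core['limit']} requests left; resets at {core['reset']}")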
As the quickest fix, I would suggest catching the exception and logging a message rather than failing, in pr_processing.py:
import logging

from github.GithubException import RateLimitExceededException

try:
    diff_files = list(git_provider.get_diff_files())
except RateLimitExceededException as e:
    logging.error('Rate limit exceeded for the GitHub API: %s', e)
If you want, I can take this small patch, and I would love to work on a more robust solution to overcome this problem.
Best regards, Ilya
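A rough sketch of what a more robust approach could look like: use PyGithub's get_rate_limit() to wait for the reset window instead of failing outright (the wrapper itself is illustrative; only the PyGithub calls are real):

import logging
import time
from datetime import datetime, timezone

from github import Github
from github.GithubException import RateLimitExceededException

def call_with_rate_limit_wait(github_client: Github, func, *args, **kwargs):
    # Run func once; if the GitHub rate limit is hit, sleep until it resets and retry once.
    try:
        return func(*args, **kwargs)
    except RateLimitExceededException:
        reset = github_client.get_rate_limit().core.reset
        # Older PyGithub versions return a naive UTC datetime, newer ones an aware one.
        now = datetime.now(timezone.utc) if reset.tzinfo else datetime.utcnow()
        wait = max(0.0, (reset - now).total_seconds()) + 1
        logging.warning("GitHub rate limit exceeded; sleeping %.0f seconds", wait)
        time.sleep(wait)
        return func(*args, **kwargs)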
I think I managed to quickly replicate the error from the OP right in the https://platform.openai.com/playground while testing PR responses for a PR equivalent to ~4,800 tokens (GPT-4):
Rate limit reached for 10KTPM-200RPM in organization org-<id> on tokens per min. Limit: 10,000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
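(For what it's worth, a small exponential-backoff wrapper is a common way to ride out a TPM/RPM limit like this; the sketch below assumes the pre-1.0 openai Python SDK and is only an illustration, not pr-agent's actual retry code:)

import random
import time

import openai
from openai.error import RateLimitError

def chat_with_backoff(messages, model="gpt-3.5-turbo", max_retries=5):
    # Retry with exponential backoff plus jitter whenever the rate limit is hit.
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError(f"Still rate limited after {max_retries} retries")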
It is fixed for me. Thank you for all your contributions ❤️
Hey, I get the following error when running PR-Agent as a GitHub Action. I followed the installation steps.
During the "PR Agent action step" I get the following error. Important to note that there is only one open PR at the time. I also checked running API calls with the same OpenAI key, and it works with no problems.
Sorry for the big stack trace, but maybe it helps:
Please let me know what the issue is here, thanks. Love the project!