anuraghazra / github-readme-stats

:zap: Dynamically generated stats for your github readmes
https://github-readme-stats.vercel.app
MIT License
68.17k stars · 22.31k forks

[Down Time] Card throws 'maximum retries exceeded' error #1471

Open anuraghazra opened 2 years ago

anuraghazra commented 2 years ago

Hi everyone, I see that all personal access tokens are currently failing, thus you will see the "Maximum retries exceeded" error.

I'm looking into the issue to figure out what caused it.

Ideally, it should be fixed within an hour, when the PATs get reset.

Explanation


Hey everyone, sorry for the recent downtimes.

For the sake of transparency, we get a huge amount of requests per month (it's kind of mind boggling for a side project which I built just for fun). Here's the vercel dashboard statistics:

[Image: Vercel dashboard statistics]

As you can see, even with 73% of the responses being cached by Vercel, the number of live requests is huge, and the bottleneck here is Personal Access Tokens (PATs).

Each PAT has 5,000 points per hour, and given the complexity of the GraphQL query and the resource limits set by GitHub, that's actually very low for this volume of requests.

To circumvent the issue, we currently have 7 PATs (that's 35k points), which sometimes get exhausted. Thanks to @rickstaa we are going to add a few more, probably bumping it up to 10 or 12; hopefully that will help alleviate the downtimes.

Note that these downtimes are temporary and only last for about an hour, since PAT rate limits reset every hour.
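The mechanism described above, rotating through a pool of PATs and giving up once every token is exhausted, can be sketched roughly as follows. This is a hedged illustration with hypothetical names, not the project's actual implementation:

```javascript
// Hypothetical sketch of a PAT pool with retries; not the project's real code.
// `fetcher` is any function that performs the API request with a given token
// and reports whether that token's hourly rate limit is exhausted.
function retryer(fetcher, tokens, maxRetries = tokens.length) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const token = tokens[attempt % tokens.length]; // rotate through the pool
    const res = fetcher(token);
    if (!res.rateLimited) return res; // success: this PAT still has points
  }
  // Every PAT in the pool was exhausted -- the error users see on the card.
  throw new Error("Maximum retries exceeded");
}
```

With 7 tokens at 5,000 points each, the pool only fails once all 35k points are spent within the same hour, which matches the hour-long downtime windows described above.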


So what's the ideal way?

https://github.com/anuraghazra/github-readme-stats/issues/1471#issuecomment-979306704 Deploy on your own Vercel instance. Why? It's easy to do, free, and reliable.

Originally posted by @anuraghazra in https://github.com/anuraghazra/github-readme-stats/issues/2130#issuecomment-1270624232

FredHappyface commented 1 year ago

Hi thank you for this awesome project!

Just a thought: is it worth setting the default cache to a higher value, such as 12 hours or so? I say this because for many users there won't be much difference in stats over this period, and hopefully it further reduces strain on the PATs. Possibly keep 4 hours as a minimum so users can opt in to the current cache times.

Cheers 🙂
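The change proposed here amounts to clamping a user-supplied `cache_seconds` value between a minimum and a new, higher default. A minimal sketch, with illustrative names and values rather than the project's actual constants:

```javascript
// Hypothetical sketch of resolving the effective cache duration; the
// constant names and defaults are illustrative, not the project's own.
const FOUR_HOURS = 4 * 60 * 60;
const TWELVE_HOURS = 12 * 60 * 60;

function resolveCacheSeconds(requested, min = FOUR_HOURS, def = TWELVE_HOURS) {
  const parsed = parseInt(requested, 10);
  if (Number.isNaN(parsed)) return def; // no/invalid value: use the default
  return Math.max(min, parsed); // users may opt in to longer, never shorter
}
```

On a self-hosted instance the same knob is exposed through an environment variable instead, so users who want fresher stats can deploy their own instance rather than lowering the public-instance minimum.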

qwerty541 commented 1 year ago

> Hi thank you for this awesome project!
>
> Just a thought: is it worth setting the default cache to a higher value, such as 12 hours or so? I say this because for many users there won't be much difference in stats over this period, and hopefully it further reduces strain on the PATs. Possibly keep 4 hours as a minimum so users can opt in to the current cache times.
>
> Cheers 🙂

Hi, @FredHappyface! I like your idea and think that increasing the minimum caching period on the public instance is something we should do, thanks for sharing. Anyone who wants their stats reloaded more often will be able to deploy their own instance and set the CACHE_SECONDS environment variable.

@rickstaa What do you think about this? Are you up to approving a pull request with such a change if I implement it?

rickstaa commented 1 year ago

> Hi thank you for this awesome project! Just a thought: is it worth setting the default cache to a higher value, such as 12 hours or so? I say this because for many users there won't be much difference in stats over this period, and hopefully it further reduces strain on the PATs. Possibly keep 4 hours as a minimum so users can opt in to the current cache times. Cheers 🙂

> Hi, @FredHappyface! I like your idea and think that increasing the minimum caching period on the public instance is something we should do, thanks for sharing. Anyone who wants their stats reloaded more often will be able to deploy their own instance and set the CACHE_SECONDS environment variable.
>
> @rickstaa What do you think about this? Are you up to approving a pull request with such a change if I implement it?

I'm comfortable with extending the cache period to 12 hours since it will reduce the frequency of creating Personal Access Tokens (PATs) that need to be sent to @anuraghazra. However, one concern I have is that this change might lead to more users reporting issues about their GitHub cards not updating promptly.

Do you think the recent addition I made to the readme (https://github.com/anuraghazra/github-readme-stats/commit/3e66189c44fca0ff89ea975b7f71f84fbdf512ab) effectively communicates the use of our caching mechanism?

qwerty541 commented 12 months ago

> Hi thank you for this awesome project! Just a thought: is it worth setting the default cache to a higher value, such as 12 hours or so? I say this because for many users there won't be much difference in stats over this period, and hopefully it further reduces strain on the PATs. Possibly keep 4 hours as a minimum so users can opt in to the current cache times. Cheers 🙂

> Hi, @FredHappyface! I like your idea and think that increasing the minimum caching period on the public instance is something we should do, thanks for sharing. Anyone who wants their stats reloaded more often will be able to deploy their own instance and set the CACHE_SECONDS environment variable. @rickstaa What do you think about this? Are you up to approving a pull request with such a change if I implement it?

> I'm comfortable with extending the cache period to 12 hours since it will reduce the frequency of creating Personal Access Tokens (PATs) that need to be sent to @anuraghazra. However, one concern I have is that this change might lead to more users reporting issues about their GitHub cards not updating promptly.
>
> Do you think the recent addition I made to the readme (3e66189) effectively communicates the use of our caching mechanism?

I have opened pull request https://github.com/anuraghazra/github-readme-stats/pull/3242 extending the default cache time to 8 hours as a start, along with small improvements to the documentation explaining our caching mechanism. Please check it when you have free time.

devashish2024 commented 5 months ago

@anuraghazra I'd recommend simply doing this: run multiple servers behind the main server.

You could take the main server and host 3 other servers for the same thing. The main server would route to the least-recently-used server, so it could give you 3x the throughput.

qwerty541 commented 5 months ago

> @anuraghazra I'd recommend simply doing this: run multiple servers behind the main server.
>
> You could take the main server and host 3 other servers for the same thing. The main server would route to the least-recently-used server, so it could give you 3x the throughput.

Hey, @ashishagarwal2023! Thanks for your input. It looks like you've slightly misunderstood the problem. We are not experiencing problems with Vercel server throughput or slow response times. Our problem is the 5,000-point limit per GitHub API token, so when that limit is hit the server starts showing the error. Currently our team is trying to handle this problem with the following actions:

pwbriggs commented 4 months ago

shields.io ran into a similar issue a while back, and GitHub support recommended a clever solution in https://github.com/badges/shields/issues/529#issuecomment-228605811. You could consider something similar, though it might be difficult to implement (and the requests would still run on your Vercel instance). Worth considering, though 🤷