shenef closed this issue 2 years ago
I've changed the message because it can also fetch a few other unrelated things, such as the game name, category, user, etc.
The message is currently "Slowing down".
I've set it to slow down once it reaches 100 requests. The threshold can be adjusted, and the pause is one minute.
So a user who needs 100 requests waits 1 additional minute; a user who needs 200 requests waits 2 additional minutes.
It's bothersome, but better than the program terminating!
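For reference, the throttling described above could be sketched roughly like this. The names (`throttled_request`, `REQUEST_LIMIT`, `PAUSE_SECONDS`) are illustrative stand-ins, not the project's actual code:

```python
import time

REQUEST_LIMIT = 100   # requests allowed before pausing (adjustable)
PAUSE_SECONDS = 60    # length of each pause: one minute

_request_count = 0

def throttled_request(do_request, sleep=time.sleep):
    """Run do_request(), pausing one minute every REQUEST_LIMIT calls."""
    global _request_count
    _request_count += 1
    if _request_count % REQUEST_LIMIT == 0:
        print("Slowing down")  # the message mentioned above
        sleep(PAUSE_SECONDS)
    return do_request()
```

With these numbers, 250 requests would trigger two one-minute pauses (after request 100 and request 200), matching the "1 extra minute per 100 requests" behaviour described.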
Thank you for the report. Feel free to reopen this issue if you encounter this again so the parameters can be tweaked again.
I upped it to 150.
Seems to work fine now; it took over an hour though (which is fine for that amount of runs).
```
Nordanix; 1262 PBs (57 days, 0:26:40); 2988 runs (58 days, 1:20:53); 8 systems; 38 Games
1 - 1262 PBs (57 days, 0:26:40)
2 - 2988 runs (58 days, 1:20:53)
3 - 1111 PBs with multiple runs
4 - 8 systems
5 - 38 Games
Which option? [1 - 5]
```
Considering that the quota is 100 per minute, I think 150 is a good compromise. I'll try to reduce the timer.
Can you retry? My last commit reduced the sleep timer. The time should be a third of what it was with this change alone.
I think I also forgot to implement a way to reduce requests for level names. I'll do that eventually.
Unfortunately it doesn't look like that fixed it
```
Fetching ADVERSE - Village World - 100%
Fetching ADVERSE - Dockyard 1 - 100%
Fetching ADVERSE - Dockyard 2 - 100%
Traceback (most recent call last):
  File "C:\...\SRC-statistics-master\main.py", line 7, in <module>
    user = user(input("Who? "))
  File "C:\...\SRC-statistics-master\user.py", line 21, in __init__
    self.PBs = PBs(get_PBs(ID))
  File "C:\...\SRC-statistics-master\PBs.py", line 11, in __init__
    self.data.append(PB(pb))
  File "C:\...\SRC-statistics-master\PBs.py", line 57, in __init__
    tempo_leaderboard = leaderboard(self.IDs, self.game, self.category, (self.place, self.time), level=self.level)
  File "C:\...\SRC-statistics-master\leaderboard.py", line 22, in __init__
    infos = get_leaderboard_level(IDs)["data"]["runs"]
  File "C:\...\SRC-statistics-master\api.py", line 28, in get_leaderboard_level
    rep = requester(f"/leaderboards/{IDs[0]}/level/{IDs[2]}/{IDs[1]}" + varistr)
  File "C:\...\SRC-statistics-master\api.py", line 169, in requester
    raise BaseException(f"Please report this, {rep.status_code} - {URL}{link}\n{rep.json()['message']}")
BaseException: Please report this, 420 - https://www.speedrun.com/api/v1/leaderboards/o6gl9kod/level/495zgz29/7kj87rgd
You have reached the maximum allowed number of requests per minute. Calm down, buddy.
```
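One way to survive that 420 instead of raising and killing the program would be to pause and retry inside the request helper. This is only a sketch under assumptions: `fetch`, `HOLD_SECONDS`, and `MAX_RETRIES` are hypothetical stand-ins, not the names actually used in `api.py`:

```python
import time

HOLD_SECONDS = 5   # assumed pause length; the real value may differ
MAX_RETRIES = 3    # give up only after several attempts

def request_with_hold(fetch, sleep=time.sleep):
    """Call fetch() -> (status_code, payload); on HTTP 420 (rate limited),
    wait a few seconds and retry instead of raising immediately."""
    for _attempt in range(MAX_RETRIES):
        status, payload = fetch()
        if status != 420:
            return payload
        sleep(HOLD_SECONDS)  # hold briefly, then resume
    raise RuntimeError("still rate limited after retries")
```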
I got an idea to improve that for the longer requests. Can you tell me how long it takes before you get this error, @shenef ?
The time until it errors seems to vary, here is the full log of an attempt that failed pretty quickly, only 1.5 minutes: https://haste.thevillage.chat/epewujopik
Oh okay, it is faster than I thought.
But anyway, I just made a massive change to the code, so I need this to be tested with the latest commit in order to know what the current situation is.
> Oh okay, it is faster than I thought.
It varies by a lot, maybe based on the current API load. 1.5 minutes is really fast though; I have had it working for 30+ minutes before.
Anyway, here are two tries of requesting Nordanix's data using the latest code:
Edit: I should test this again, since more commits landed after I downloaded the version I tested with.
I do not expect this to change much, but I wanted to know whether the timing changed, since I reordered one thing that may have increased the number of requests per second.
The printing in the terminal feels faster somehow, even though my changes shouldn't have influenced the raw number of requests.
Currently the program won't close if too many requests are made, but I should make some tweaks to avoid touching that rate limit.
My new way of doing requests has reduced the number of requests considerably. I still want to add some caching in order to eliminate requests when reusing the script.
But for now, the number of requests should be much lower than before. And if it does hit the limit, it will hold for a couple of seconds before resuming.
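The caching idea could look something like this minimal sketch: remember responses on disk so a rerun of the script skips requests it has already made. `CACHE_FILE` and `cached_get` are assumptions for illustration, not the script's real API:

```python
import json
from pathlib import Path

# Hypothetical cache file; the real script would pick its own location.
CACHE_FILE = Path("api_cache.json")

def cached_get(url, fetch):
    """Return the cached payload for url, calling fetch(url) only on a miss."""
    cache = json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}
    if url not in cache:
        cache[url] = fetch(url)                  # only network call on a miss
        CACHE_FILE.write_text(json.dumps(cache))  # persist for the next run
    return cache[url]
```

A rerun with the same inputs would then hit the API zero times for anything already in the cache.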
Using the Enhancement label because it no longer terminates, so it's not a bug. However, it still needs some rework.
Closing, since I don't think this should be an issue anymore.
When requesting large numbers of runs, slow down the requests instead of hitting the API limit and then terminating the program.
Currently the program has to be rerun manually until all runs are fetched. After running it about 10 times, re-runs don't seem to fix the problem. A message could be: --- Fetched x runs, slowing down requests to prevent hitting API limits ---