Thanks for the PR, @howlagon! In my experience, s3s doesn't crash when tokens expire & new ones need to be generated. I simply get output like this:
Are you experiencing something different?
I looked into it further, and the crash only occurs when using `-M [n]` and `-r` at the same time. This appears to happen because `-r` sets `skipprefetch` to True, and `-M` ensures `skipprefetch` stays True if it was set beforehand. Updating my fork to be more specific.
Before, this is what would happen when I ran it on my server for extended periods of time. After a while (ranging from a couple hours to a bit over a day), the code would crash like this (output from `journalctl`):
Thanks for that additional info! I'm traveling for the next few weeks and will look at this when I return later this month – sorry for the delay!
The reason `skipprefetch` is getting set to True here if and only if `which == "both"` is because otherwise `check_if_missing()` (which then calls `fetch_json()`) will call `prefetch_checks()` twice (once for battles, once for SR jobs). So instead, we're calling it once manually beforehand and then skipping the following two automatic checks.
I don't know if your way would break that or not, but I don't have a ton of time to check right now, and getting rid of those lines worries me a bit, haha. But I think this would also work: simply put a try/except around the line that was throwing an error for you (line 1520 in your screenshot)

https://github.com/frozenpandaman/s3s/blob/da6829b7826a46c33530f48d5043a98484c56203/s3s.py#L1530

and if it throws the `JSONDecodeError`, call `prefetch_checks()` before trying to re-run the line, so it'll get the cookies and then succeed, and continue.

Edit: Actually, on second thought, I think this retry logic should go somewhere else... Edit 2: Nevermind, I think I was right the first time lol.
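In rough pseudocode, something like this (the variable names and call arguments here are placeholders, not the actual signatures in s3s.py):

```python
# Rough sketch of the suggested retry – placeholder names, not the real s3s.py code.
import json

try:
    data = fetch_json(which)  # ! fetch from online
except json.decoder.JSONDecodeError:
    prefetch_checks()  # tokens likely expired – regenerate cookies/tokens first
    data = fetch_json(which)  # then re-run the same fetch, which should now succeed
```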
Thoughts?
Also, in this PR, unless I'm mistaken...
https://github.com/frozenpandaman/s3s/blob/da6829b7826a46c33530f48d5043a98484c56203/s3s.py#L2059
...is still getting called just a couple lines after. Even if `skipprefetch` doesn't get set to True in the earlier part that you removed, `check_old` (`-r`) is still True, and it's an OR statement, so either way `skipprefetch` will still get enabled and affect the following `check_if_missing()` line... meaning I think this error would still occur even if this PR were merged, no?
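(Paraphrasing the effect, not the exact line at that link – it amounts to something like:)

```python
# Not the literal s3s.py line – just the OR behavior being described:
skipprefetch = skipprefetch or check_old  # -r makes check_old True, so this ends up True either way
```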
To be fair, my method was pretty brutish, so I get that.
Would it be better to put the try/except (or an `if query1.status_code != 200` check) at line 254/255 in s3s.py? Just for added protection against really specific edge cases where tokens expire partway through gathering history. I do agree, though, that it should be added somewhere else. If only Python had a retry function for try/except clauses.
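Closest thing I can think of is rolling your own decorator – purely a sketch, nothing here is actual s3s code:

```python
# Hypothetical retry helper – a sketch, not anything that exists in s3s.
import functools
import json

def retry_once_after(refresh):
    """Re-run the wrapped function once, after calling refresh(), on a JSON decode error."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except json.decoder.JSONDecodeError:
                refresh()  # e.g. regenerate the expired tokens
                return func(*args, **kwargs)
        return wrapper
    return decorator
```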
> Also, in this PR, unless I'm mistaken...
>
> https://github.com/frozenpandaman/s3s/blob/da6829b7826a46c33530f48d5043a98484c56203/s3s.py#L2059
>
> ...is still getting called just a couple lines after. Even if `skipprefetch` doesn't get set to True in the earlier part that you removed, `check_old` (`-r`) is still True, and it's an OR statement, so either way `skipprefetch` will still get enabled and affect the following `check_if_missing()` line... meaning I think this error would still occur even if this PR were merged, no?
.... nooooo (lie) I must've forgotten that line was there. Whoops! You're entirely right about that.
In my mind, `fetch_json()` shouldn't do any of the "is the cookie valid or not" checking – all it does is actually try to fetch it (or fail to do so, with errors handled elsewhere). That's what all the `# ! fetch from online` comments are for, so I can quickly remember that fact for every instance where the function is called and deal with it appropriately if/when it fails.
As such, I think this commit should fix things: 48e3d5a
`check_for_new_results()` will now properly be the place that ensures tokens are up-to-date. Or, well, it tries to run as-is (as otherwise this would be adding a new request every `N` seconds, which isn't ideal), but if it gets an error, then it gets updated tokens and re-tries (and that retry should never fail, unless maintenance has begun or something... but I think we should deal with that potential issue only if it ever actually comes up).
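So the monitoring loop ends up shaped roughly like this – a sketch of the flow described above, not the literal code in the commit:

```python
# Rough shape of the flow described above – placeholder names, not the literal 48e3d5a code.
import time

while True:
    check_for_new_results(which)  # runs as-is; only refreshes tokens if a fetch errors out
    time.sleep(secs)  # the -M [n] interval – no extra token request every cycle
```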
If you could try this out (will be live in v0.6.0 in a few minutes) and let me know if it seems to fix stuff, that'd be great!
Currently, s3s sometimes throws a `json.decoder.JSONDecodeError` whenever the tokens have expired while using both `-M [n]` and `-r`. This should hopefully fix it, allowing for continuous usage with few (to no) restarts :D
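e.g. the kind of invocation that was crashing before (the interval value is just an example):

```
python s3s.py -M 300 -r
```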