Closed by Infernio 1 year ago
We used to have an automatic update check, but it was removed. It was back in the 2.7 days, and ran on multiprocessing so it could run/download in the background without locking up the GUI. The issue was that ensuring the child process gets killed properly on shutdown is hard, especially with networking, because it can end up in a deadlock. Some users had intermittent issues where closing Bash didn't actually kill the child process, so they couldn't relaunch due to the single instance check. They'd have to force-kill any open python.exe processes. EDIT: It was also based on polling SourceForge, so yeah.
Background aside, there have been a lot of improvements in the cooperative multitasking area since then. Options: the obvious one is asyncio. Instead of using it directly, though, I'd either use anyio's thin wrapper that exposes a trio-like interface to it, or (IMO the better option) use trio itself (started by the author of those articles). It's pretty actively developed, and quite a few things they've developed have pushed asyncio more towards their style; they were even the main motivators for adding the try/except* syntax in Python 3.11.

The trickiest part with using an async library is going to be getting it to play nice with wxPython. I'm not super familiar with asyncio, but I know trio lets you run itself on top of another event loop, using trio.lowlevel.start_guest_run. I've played with that here; we could just use it or roll our own (that code isn't up to date with the latest trio/wxPython releases, but should be easy enough to update).
If we're going sync, requests is much better than the standard library for interacting with web pages / GET/POST APIs; for async, asks is almost the same thing. Then for parsing, Beautiful Soup is amazing for webpage results; otherwise JSON is easy enough to handle with the standard library.
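For an update check hitting a JSON API, the stdlib json module really is all the parsing needed. A hedged sketch, assuming a hypothetical release-info payload (the endpoint, field names, and version scheme here are made up for illustration):

```python
import json

# Hypothetical response body; the real check would fetch it with e.g.
#   resp = requests.get(url, timeout=10)
#   raw = resp.text
PAYLOAD = '{"tag_name": "v312", "html_url": "https://example.invalid/latest"}'


def parse_latest(raw):
    # Plain json handles API responses; Beautiful Soup would only come
    # in if we had to scrape an actual HTML page instead.
    data = json.loads(raw)
    return data['tag_name'].lstrip('v'), data['html_url']


latest, release_url = parse_latest(PAYLOAD)
is_newer = int(latest) > 311  # compare against the running version
```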
I've gotten quite a bit of work done locally, reusing some of the stuff Ganda and I built for nexus.py. The only remaining challenge would be trying the async stuff you mentioned. I'll push the WIP commit tomorrow.
See e4a79155f751c2b97e6ffa8abc6acc5727470c9a. Note that I implemented the background check using a boring threading.Thread and wx.lib.newevent. This needs no new dependencies, seems to work great, and was very quick to write. But if you think trio or some other more sophisticated approach would be better, feel free to push a squash commit for review.
If all we're really doing is a very small check (like it looks), that should be simple enough for now. I did have a few small comments though, to ensure Bash shuts down like we expect when the internet is acting up:
Pass a timeout argument to everything in requests: timeouts. If you don't, the thread can be stuck in blocking I/O. This is OK as far as keeping the program responsive goes (in that kind of blocking I/O, the GIL is released, so Python can thread-switch just fine and the UI will still be responsive), but Python sometimes has issues killing blocked threads on shutdown, meaning the python process will remain until that thread finally dies.

And if you already had this pretty much done, I'd say go with it. No reason to rewrite just to use async. Overall looks good; like that you're using requests.
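To illustrate why the timeout matters: a server that accepts a connection but never replies leaves a recv blocked indefinitely. requests' timeout parameter bounds exactly these socket operations under the hood (timeout=(connect, read)). A small self-contained demo using a local listener that never responds:

```python
import socket

# Local listener that accepts at the TCP level but never sends a byte,
# standing in for a flaky server.
srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen(1)

# create_connection applies the timeout to the connect and to all later
# operations on the socket.
conn = socket.create_connection(srv.getsockname(), timeout=0.2)
try:
    conn.recv(1)        # would block forever without the timeout
    timed_out = False
except socket.timeout:  # alias of TimeoutError on modern Python
    timed_out = True
finally:
    conn.close()
    srv.close()
```

Without that timeout (requests' default), the recv here simply never returns, which is the hung-thread-on-shutdown scenario described above.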
I have had an idea on the backburner to switch BAIN to using async, but BAIN is still kind of a mess and hard to modify unless you've been deep in it and know its interactions well, so I've been putting that off for a long time.
The timeout is still the default from what Ganda specified, so yeah, we can adjust that. BAIN is still in bad shape and needs a bunch of refactoring before we can start tackling any large-scale improvements (FOMOD was painful).
Edit: see dcd34f9db0ddb7f3ec89da2477519e943b697d0a for timeout.
Since we want to move to a new Nexus page soon (see #640), we'll want to have an update check that can direct people to the right place to get new updates. That way we don't have to rely on Nexus' tracked pages feature alone. Of course there should be an option to turn it off as well.
I want this in for 311, so that we can consolidate the pages for 312.