python-trio / trio

Trio – a friendly Python library for async concurrency and I/O
https://trio.readthedocs.io

Using trio with Qt's event loop? #399

Closed nicoddemus closed 4 years ago

nicoddemus commented 6 years ago

Hi,

Awesome library, I've been studying it and the care with its design and implementation is something to behold, congratulations to @njsmith and the team!

At work we use Qt to develop desktop applications and I was wondering if it is possible to use Qt's event loop instead of the internal trio event loop, something like quamash does for asyncio.

njsmith commented 6 years ago

Hello!

Awesome library, I've been studying it and the care with its design and implementation is something to behold, congratulations to @njsmith and the team!

Thanks! And sorry for the slow response...

At work we use Qt to develop desktop applications and I was wondering if it is possible to use Qt's event loop instead of the internal trio event loop, something like quamash does for asyncio.

Yeah, Trio+GUI is an important question! Also... kind of complicated. Right now there isn't anything implemented. Obviously this should change. I have very little experience writing GUI apps, so I'm not sure I know enough to recognize a good solution when I see it. So it's tricky.

Well, if you look around the issue tracker, you'll see that Trio follows a thinking-out-loud-driven development methodology, and we don't have a ticket yet for GUI stuff, so let's start with a brain dump of some things I've been thinking about :-).

First, if you just want to get something working, the shortest path to do that will be to run Qt in one thread and Trio in another. It's not too hard to build some infrastructure to call between them, similar to what trio.BlockingTrioPortal does for calling between Trio and a thread running synchronous code – I can give you some tips if you like. This isn't exactly ideal, basically because it involves threads and threads are never ideal. In particular, if you have actual logic running simultaneously in the Qt thread and the Trio thread, then you have to take all the usual care about them stepping on each other's toes, which is too bad since one of the major advantages of cooperative scheduling is that it makes it easier to reason about interleavings.

That said, people do manage to write threaded programs all the time, and this probably works great for lots of cases, especially if you do things like push all the complex state manipulation onto the Trio side and restrict the Qt side to just dealing with the GUI and dispatching events to the Trio side. I'll also CC @matham here, who's been doing something similar with Kivy instead of Qt, and might be able to offer some comments. If you just want to get something working, stop here, this is definitely the path of least resistance.
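For concreteness, here's a minimal sketch of that two-thread setup using trio.BlockingTrioPortal (the "Qt" side is just a plain thread here, and fetch_data is a made-up stand-in for real work; newer Trio versions spell this trio.from_thread.run instead):

import threading
import trio

async def fetch_data():
    # Some async work that has to run inside the Trio loop.
    await trio.sleep(1)
    return "data"

def qt_thread_main(portal):
    # Pretend this is a Qt event handler: it blocks briefly while the
    # coroutine runs over in the Trio thread, then gets the result back.
    result = portal.run(fetch_data)
    print("Qt thread got:", result)

async def trio_main():
    portal = trio.BlockingTrioPortal()  # captures the current Trio loop
    threading.Thread(target=qt_thread_main, args=(portal,)).start()
    await trio.sleep(2)                 # keep the Trio loop alive meanwhile

trio.run(trio_main)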


Okay, that's an expeditious hack solution. What's a real solution look like? I'm... not sure. Popular libraries like Qt are built entirely around callbacks, while Trio's whole design (and a lot of its benefits) come from religiously avoiding callbacks. So what would a "Trio-style" GUI API even look like? I'm intrigued by some of the things I see coming out of the JS world, like Elm and Redux, that are designed to build GUIs without callback spaghetti. But this is obviously a case where practicality beats purity – desktop GUI APIs currently don't work this way, and we're not going to redesign Qt anytime soon! This makes me a bit nervous about committing to a particular solution right now (e.g. by including some half-baked qt support directly inside the trio core), but we gotta do something. So let's think about a possible medium term solution: smushing Trio and Qt together into the same thread (so at least we avoid threading issues), while keeping their existing architectures/APIs.

There are some practical challenges here: currently Trio doesn't have the infrastructure to use a pluggable event loop; in fact it doesn't expose an "event loop" or "reactor" concept at all. Internally, of course, it does have an API for this to handle different OSes (see the IOManager classes in trio/_core/), so we could imagine having a Qt backend. But there are two reasons this API is internal: (1) I'm reluctant to commit to a particular stable form; that adds a big chunk of concrete right in the middle of some of our most complicated code, (2) it actually doesn't have a single fixed API that abstracts over different OSes, since that ends up exposing some messy least-common-denominator thing; instead each backend is allowed to expose the full power of the current OS. But this approach really works best when you know you only have a fixed set of backends you care about. You could imagine coming up with some way for a Qt backend to expose the particular primitives that Qt gives you, though we don't have that now (currently the set of exposed primitives in trio.hazmat are determined at import time based on the OS). That's doable. What gets really nasty though is that the abstraction layers on top have to know about all the different backends – so like implementing trio.subprocess is on my todo list (#4), and it's hard enough getting good subprocess support on the big 3 OSes, but now Qt's native subprocess primitives are totally different again, so I guess I'd need a fourth implementation of trio.subprocess? Ick.

[This is where I paused writing this for like 4 days, while mulling over an idea :-)]

OK, though... so the two things we've identified as wanting to do are: (1) run all the trio and qt user code in the same thread, to reduce race conditions, (2) keep trio's existing backend interfaces, and ideally implementations. So... is it possible to use a threading hack to move just trio's I/O backend into its own thread?

I haven't thought this through fully, but I think there are two basic approaches that might work:

Hmm.

matham commented 6 years ago

Yes, I implemented what you describe in the first part, and that was indeed the quickest and simplest approach. Lemme first explain how kivy does its event loop, as I'm not super familiar with qt. Broadly, it looks like this:

def run():
    while True:
        sleep(1 / frame_rate)
        process_all_callbacks()
        read_input_providers_and_dispatch()
        update_gl_graphics()
run()

Qt has something similar in processEvents or something, I believe.

I could see two ways to run async code with kivy, which I had originally planned to implement using the asyncio lib. (1) Have kivy act as an asyncio backend by implementing the kivy event loop as an AbstractEventLoop that maps tasks to callbacks. I think this is what you were thinking of. The con of this is that I'm not sure I trust kivy to run async task scheduling, because conceptually, scheduling and flipping between async tasks is very different from just executing all the callbacks in sequence and waiting until the next frame (even if you allow priority switching etc.). What you're optimizing for is very different between these two situations.

The second approach is to have asyncio run kivy as just another task and leave asyncio squarely in charge of scheduling as follows:

import asyncio

async def run():
    while True:
        await asyncio.sleep(1 / frame_rate)
        process_all_callbacks()
        read_input_providers_and_dispatch()
        update_gl_graphics()

loop = asyncio.get_event_loop()
loop.run_until_complete(run())
loop.close()

After seeing trio and nurseries I became convinced that nurseries are the way to go for async and that only the second option makes sense. I also think now that with current GUIs, the best thing to do is to let the GUI callbacks deal only with GUI stuff and let all the user stuff run separately as async tasks, like in option (2). I.e. use the GUI callbacks as little as possible and instead use await etc. So how would this work, especially with threads?

I can see a few configurations, listed below as well as some example functions/classes that I use.

async def run_in_kivy_thread(fn, *args):
    # create kivy callbacks that will exec fn
    # suspend task and wait for kivy to notify us it executed
    # the fn and then we'll reschedule the task
    ...

class TrioTrioPortal:

    def __init__(self, token=None):
        self.token = token or current_trio_token()

    async def trio_run_in_other_trio_thread(self, fn, *args):
        # schedule a call in the other trio loop running on another thread
        # suspend task and wait until the task is done and return the result
        ...

The last approach is the one I'm currently trying out, to see how feasible it is in a real application. You should be able to do something very similar with qt.
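Here's one way trio_run_in_other_trio_thread could be fleshed out. This is just a sketch, assuming the other Trio loop keeps a long-lived nursery open for exactly this purpose (the other_loop_nursery global below is hypothetical):

import trio
from trio.hazmat import current_trio_token  # trio.lowlevel in newer Trio

class TrioTrioPortal:

    def __init__(self, token=None):
        # Token of the *other* Trio loop; capture it inside that loop.
        self.token = token or current_trio_token()

    async def trio_run_in_other_trio_thread(self, fn, *args):
        my_token = current_trio_token()  # the loop we're calling *from*
        done = trio.Event()
        result = {}

        async def runner():
            try:
                result["value"] = await fn(*args)
            except BaseException as exc:
                result["error"] = exc
            # Hop back to the calling loop's thread before touching its Event.
            my_token.run_sync_soon(done.set)

        def start_runner():
            # Runs inside the other loop's thread, so starting a task there is
            # safe; other_loop_nursery is that loop's hypothetical long-lived nursery.
            other_loop_nursery.start_soon(runner)

        self.token.run_sync_soon(start_runner)
        await done.wait()
        if "error" in result:
            raise result["error"]
        return result["value"]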

There's another convenience thing I use; I added a way to await in trio for an event (or property change) to occur in kivy in a thread-safe manner. I.e. say there's a kivy on_release event for a button, you can do:

async for event in button.async_bind('on_release'):
    do_something()

This allows you to watch in trio for events and other things within the GUI without blocking. The only difficulty here is that it's not suitable for actual event handling that needs an immediate response, because trio will schedule the task rather than executing it immediately the way a traditional event handler would.

This approach requires pretty much no changes in trio.

njsmith commented 6 years ago

Ah, right, that's another option I forgot to mention: using Trio to implement a new Qt event loop backend. This is possible in theory, by implementing the QAbstractEventDispatcher interface. (That's basically how we're supporting asyncio in trio-asyncio.) But this would require a lot of messing with complicated Qt internals that are already very fast and mature and may make awkward assumptions, so I'm a bit dubious. (In particular, Qt allows for recursively re-entering the event loop, which Trio doesn't.) OTOH maybe it would work great! If someone wants to experiment with this, I will be fascinated to hear what you come up with :-).

Edit: here's an example: https://github.com/sjinks/qt_eventdispatcher_libuv
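For a sense of the surface area involved, here's a skeleton of the dispatcher interface (method names taken from the Qt 5 QAbstractEventDispatcher docs; every empty body below would have to be implemented in terms of Trio primitives, and the dispatcher would have to be installed before the QApplication is created):

from PyQt5.QtCore import QAbstractEventDispatcher

class TrioEventDispatcher(QAbstractEventDispatcher):
    # The virtuals Qt expects a custom dispatcher to provide.
    def processEvents(self, flags): ...
    def hasPendingEvents(self): ...
    def registerSocketNotifier(self, notifier): ...
    def unregisterSocketNotifier(self, notifier): ...
    def registerTimer(self, timerId, interval, timerType, obj): ...
    def unregisterTimer(self, timerId): ...
    def unregisterTimers(self, obj): ...
    def registeredTimers(self, obj): ...
    def remainingTime(self, timerId): ...
    def wakeUp(self): ...
    def interrupt(self): ...
    def flush(self): ...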

njsmith commented 6 years ago

@matham

you cannot create/inject a task into the trio thread from kivy thread.

Can you elaborate on what you mean here? Obviously at some level you can do this, that's how trio.BlockingTrioPortal and your TrioTrioPortal work, so I assume you mean something a little more nuanced...

matham commented 6 years ago
you cannot create/inject a task into the trio thread from kivy thread.

As you say, we literally can, but then where do you wait for it without blocking, or do you abandon it while it's being executed in the other thread? Even if you're executing short user code in the user thread while waiting in the kivy thread, you're still making the main thread unresponsive. But if you also run a trio event loop in the kivy thread, then you can just do await execute_in_user_trio_thread() in it and the kivy thread is not blocked.

Of course you could also make the user thread schedule a callback in kivy when it's done rather than waiting for it, but that's more annoying. I'm still not sure how useful this will actually be, though, as there may not be much need to call into the user thread and execute user code there (or at least I hope so, because otherwise the user code will be called callback-style, which is the opposite of the goal here).

njsmith commented 6 years ago

Right, but that's just the inherent annoyingness of working with a callback API, right? :-)

If you have something short and synchronous that you want to do, the kind of thing that you'd normally just execute immediately in the Qt thread, except that it has to be done in the Trio thread instead... I think in this case it might actually be OK to block the Qt thread while waiting for it. The thread switch adds some bookkeeping overhead, but it's still not going to block the Qt thread for much longer than it would take to execute it normally.

For long-running actions, the only way to do that in Qt/Kivy's native API is to schedule them and then get a callback when they're done – that's just a fundamental property of how Qt/Kivy work, not something that Trio created or can do anything about :-). It'd be nice if we had non-callback-based GUI libraries, but until then I think we just have to live with this.

And in lots of cases, you may be fine just scheduling something to run in the trio thread and then abandon it – for example, run_sync_soon_in_trio(q.put_nowait, value), where q is a trio.hazmat.UnboundedQueue. This is not the kind of API we like in Trio-land for all the obvious reasons, like, what if the call fails? But in normal operation UnboundedQueue.put_nowait never fails; and if something really abnormal goes wrong, and we lose the exception... well, that's the same way that everything works on the Qt side, so we're not really making things worse if there are a few operations in the Qt/Trio interface that have the limitations of Qt.
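As a concrete sketch of that fire-and-forget pattern: run_sync_soon_in_trio above is a stand-in name, the real hook is TrioToken.run_sync_soon, and the Qt side would need the Trio token handed to it up front.

import trio
from trio.hazmat import UnboundedQueue, current_trio_token  # trio.lowlevel in newer Trio

events = UnboundedQueue()

def on_qt_event(trio_token, value):
    # Runs in the Qt thread: schedule the put in the Trio thread and move on,
    # without waiting for a result.
    trio_token.run_sync_soon(events.put_nowait, value)

async def consume_events():
    # Runs as a Trio task; the queue yields events a batch at a time.
    async for batch in events:
        for value in batch:
            print("got", value)

async def trio_main():
    token = current_trio_token()
    # ...hand `token` over to the Qt side so on_qt_event can use it...
    await consume_events()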

nicoddemus commented 6 years ago

Hi @njsmith and @matham,

Thanks a lot for the detailed discussion.

Full disclosure, I've never programmed with asyncio or trio besides some toy examples, so please excuse me if I'm saying something which is clearly wrong or impossible to accomplish. :grin:

I don't know the internal details and consequences of integrating a separate event loop in Trio, so I'm still digesting what has been said here.


The use case I had in mind was to allow the user to execute asynchronous code in response to user events, something like:

button.clicked.connect(download_images)

# ...

async def download_images():
    button.setEnabled(False)
    progress_bar = QProgressBar()
    for image_path in paths:
        await download_image(image_path)
        progress_bar.advance()
    button.setEnabled(True)

(I understand that's not possible from a technical standpoint, I'm just trying to demonstrate the general idea.)

Without getting into the issue of actually integrating into the Qt event loop, I believe we can get the above to work with an approach similar to:

button.clicked.connect(download_images_sync)

def download_images_sync():
    trio.run(download_images)

# ...

async def download_images():
    button.setEnabled(False)
    progress_bar = QProgressBar()
    for image_path in paths:
        await download_image(image_path)
        progress_bar.advance()
    button.setEnabled(True)

Of course the problem with this is that download_images will block Qt's own event loop, leaving the GUI unresponsive.

QApplication.processEvents() can be called periodically to avoid blocking Qt's event loop. What if we schedule two tasks to run at the same time, one doing the original work of downloading the images and another one which just calls QApplication.processEvents() periodically?

button.clicked.connect(download_images_sync)

def download_images_sync():
    trio.run(run_with_process_events)

async def run_with_process_events():
    async with trio.open_nursery() as nursery:
        event = trio.Event()
        nursery.start_soon(download_images_monitored, event)
        nursery.start_soon(process_events_tick, event)

async def process_events_tick(event):
    while not event.is_set():
        qApp.processEvents()
        await trio.sleep(1 / 20)

async def download_images_monitored(event):
    await download_images()
    event.set()

# ...

async def download_images():
    button.setEnabled(False)
    progress_bar = QProgressBar()
    for image_path in paths:
        await download_image(image_path)
        progress_bar.advance()
    button.setEnabled(True)

This looks like it would run download_images asynchronously and would keep the GUI responsive. Of course this can eventually be refactored to something nicer to use, for example:

button.clicked.connect(async_binder(download_images))

async def download_images():
    button.setEnabled(False)
    progress_bar = QProgressBar()
    for image_path in paths:
        await download_image(image_path)
        progress_bar.advance()
    button.setEnabled(True)

Where async_binder would be responsible for creating the intermediate management tasks.

But there's the question of what happens if another user event (another button, for example) also wants to execute its asynchronous handler while we are downloading images in download_images. What happens if we call trio.run while we are already processing another task?


Back to how to integrate this into the Qt event loop: what if trio exposed a way to schedule tasks, and the user could drive task execution themselves? For example (using a fictional API):

task_manager = trio.get_task_manager()

task_manager.schedule(coroutine1)
task_manager.schedule(coroutine2)
# runs whatever task should execute next, until we find an "await" call
task_manager.advance_until_await()  

IOW, trio "event loop" would be something like:

while task_manager.has_running_tasks():
    task_manager.advance_until_await()  

If the above is possible, then integrating it into Qt's event loop would be simple:

# normal Qt application:
qApp.exec_loop()

# integrated with trio:
task_manager = trio.get_task_manager()
while some_condition_to_quit_app():
    task_manager.advance_until_await()
    qApp.processEvents()

I'm not sure how this would play with trio's internals (especially with I/O), but I thought I would throw the idea out here and see where it lands.


As I said I'm still digesting what has been said in this thread so far, so I will come back with more comments if I have more to contribute in the next few days.

nicoddemus commented 6 years ago

Hi folks,

I did a quick experiment with my ideas above in this repository: nicoddemus/trio-qt-sandbox.

The idea of running two tasks at the same time, one running a loop which calls QApplication.processEvents periodically while the other executes an asynchronous function, works only partially, because we can't have two trio.run calls happening simultaneously.

But the other idea does seem to work: start a task which periodically calls QApplication.processEvents, and map user actions to asynchronous functions which are then scheduled to run in the same nursery as that task.

Here's the full code for the latter idea:

import trio
from PyQt5.QtWidgets import QApplication, QWidget, QHBoxLayout, QPushButton

async def loop(app, event):
    # Pump Qt from inside Trio: handle any pending Qt events, then yield to
    # other Trio tasks for up to 10 ms or until the exit event is set.
    while True:
        app.processEvents()
        with trio.move_on_after(0.01):
            await event.wait()
        if event.is_set():
            return

_nursery = None

def get_nursery():
    return _nursery

def async_bind(fn):
    def wrapper():
        get_nursery().start_soon(fn)

    return wrapper

async def say_hello():
    print('Hello')

async def main():
    app = QApplication([])

    widget = QWidget()
    layout = QHBoxLayout(widget)
    hello = QPushButton('Hello', widget)
    exit = QPushButton('Exit')
    layout.addWidget(hello)
    layout.addWidget(exit)
    widget.show()

    event = trio.Event()

    def on_exit():
        print('on_exit')
        event.set()

    hello.clicked.connect(async_bind(say_hello))
    exit.clicked.connect(on_exit)

    async with trio.open_nursery() as nursery:
        global _nursery
        _nursery = nursery
        nursery.start_soon(loop, app, event)

if __name__ == '__main__':
    trio.run(main)

This idea works, but I don't much like that move_on_after wait inside the loop function; it feels like this will make the event loop not as responsive as it should be. I will see if there's a better solution in the Qt API.

Thoughts?

imrn commented 6 years ago

Your loop's resolution is 0.01 secs. What % load does the Python instance impose on the CPU core in this case? Can you provide some values for 0.001, 0.0001, etc.?

imrn commented 6 years ago

Of course this question is for an idle application, just to get an idea of the fixed overhead of your solution.

nicoddemus commented 6 years ago

@imrn actually my CPU sits at 0% flat regardless of which value I use for move_on_after. I tried 0.001, 0.0001 and even 0.0 and I don't see any CPU usage spikes. Strange, I expected at least some overhead...

imrn commented 6 years ago

Can you verify that your application is NOT blocked somewhere else? For example, you could add another task printing something every 0.2 secs. Is it operational?

nicoddemus commented 6 years ago

Scheduled a new task which prints to the console every 0.2 secs:

@@ -32,6 +31,13 @@ async def say_hello():
     print('Hello')

+async def tick_tack(event):
+    while not event.is_set():
+        with trio.move_on_after(0.2):
+            await event.wait()
+        print('Tick tack')
+
+
 async def main():
     app = QApplication([])

@@ -58,6 +64,7 @@ async def main():
         global _nursery
         _nursery = nursery
         nursery.start_soon(loop, app, event)
+        nursery.start_soon(tick_tack, event)

Works pretty well, I see the periodic output in the console, clicking in the hello button also prints "hello", and CPU usage is a flat 0% even after I change move_on_after in loop to 0.0001.

njsmith commented 6 years ago

Hi guys,

Psst, not everyone here is a guy :-). Can you edit your message to "Hi all" or "Hi folks" or something similar? Thanks!

Works pretty well, I see the periodic output in the console, clicking in the hello button also prints "hello", and CPU usage is a flat 0% even after I change move_on_after in loop to 0.0001.

Huh, that's pretty cool. The main downside to this approach is that it prevents the CPU from entering deeper sleep modes, so it wastes power. (If you're on Linux, you can run powertop and see it complain about all the wakeups the app is doing.) I also expected it would add a bunch of latency to the UI, but if it doesn't, then cool. And if your apps are running on desktops maybe the power cost isn't a big deal – depends on your particular users and your tradeoffs. This kind of busy-polling is kind of a hack, but if it solves your problem, well, sometimes a hack is the right solution...

Backing up: the fundamental challenge here is exactly how to get Qt's I/O and Trio's I/O to play nicely with each other. If you use a busy-polling approach, then you can just poll for both kinds of I/O on each cycle, and integration is easy – but you have the problem of wasting power. To solve that, you need the app to actually go to sleep until something happens to wake it up, which means calling some low-level OS routine like select or epoll or GetQueuedCompletionStatusEx. The way these all work is that they give the OS a list of events we're interested in, and tell it that it doesn't have to wake us up again until one of them happens. But obviously, a single thread can only call one of these functions at a time – we can't have Qt calling one of these functions, and have Trio separately calling one of these functions, and have everything running in a single thread. Something has to give. So there are a few options: keep polling both event loops on a regular basis; use one thread for Qt and one for Trio; run Qt on top of Trio ("Qt on Trio"); or use two threads for waiting for I/O and one thread for executing code.

In the long run, my tentative guess is that either the "Qt on Trio" approach or the "use two threads for waiting for I/O, and one thread for executing code" approach are the ideal options. But the "polling on a regular basis" and "use one thread for Qt and one for Trio" approaches are both good practical options if you need something now.

njsmith commented 6 years ago

Oh, ugh, there's another complication in gluing these libraries together that I forgot about: Qt allows for re-entering the main loop, like by calling processEvents recursively inside processEvents. This is kind of a gross hack that IIRC is used in a few places where they decided doing things in a proper non-blocking way was just too annoying. Trio doesn't support this. I think this is a big problem for "Qt on Trio".

njsmith commented 6 years ago

Thinking a little more about the "two I/O threads, one execution thread" approach (which needs a better name): In addition to the two possible implementation strategies I described above (https://github.com/python-trio/trio/issues/399#issuecomment-359399151), there's a third one that might be worth considering: hook into the Qt I/O loop so that we know when it goes to sleep and when it wakes up, and when it does a zero-timeout poll. When it's asleep, and only when it's asleep, let the Trio I/O thread run.

I think this might let us keep the same implementation of IOManager that we have right now.

The Qt event loop has aboutToBlock and awake signals that we could hook into, but from a quick skim I think aboutToBlock isn't actually signalled when it does a zero-timeout wait. When an event loop has lots of work being scheduled to run immediately, it obviously doesn't want to go to sleep, but it does want to still check for I/O occasionally, to make sure that I/O bound tasks and CPU-bound tasks can both get a fair chance to run. When Qt does a zero-timeout poll like this, we want to do the same for Trio's I/O channels. So we need some way to hook into this, and I think Qt may not give us one directly. So we might have to subclass the usual QEventDispatcher so we can override processEvents to do this checking. That's a pretty mild kind of hook though.
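For reference, hooking the two signals that do exist might look roughly like this (PyQt5; on_about_to_block and on_awake are hypothetical callbacks that would hand control to, and take it back from, a Trio I/O thread):

from PyQt5.QtCore import QAbstractEventDispatcher

def on_about_to_block():
    # Qt is about to go to sleep waiting for events: let the Trio I/O thread
    # run (e.g. release a lock it is waiting on).
    pass

def on_awake():
    # Qt woke up again: take back control before Qt callbacks start running.
    pass

# Must run after the QApplication has been created, so a dispatcher exists.
dispatcher = QAbstractEventDispatcher.instance()
dispatcher.aboutToBlock.connect(on_about_to_block)
dispatcher.awake.connect(on_awake)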

smurfix commented 6 years ago

There now is one example of hacking an event loop implementation so that it runs Trio underneath, i.e. trio-asyncio. That was reasonably easy because there's an abstract base class and we can hook arbitrary methods to call into Trio instead of asyncio, thereby subverting the whole thing.

Hooking into Qt is … not that easy. The Qt event loop and its notifiers are strictly callback-based. You simply can't teach trio to accept a "this socket is now readable" event with a callback unless you have a second thread. If you do that, you can either let them run in parallel or (as the "compatibility" version of trio-asyncio does) use a blocking queue and run them in lockstep. The latter may cause interesting and possibly-unavoidable deadlocks (as I have noticed in trio-asyncio).

IMHO rather than spending effort hooking into a particular event loop like Qt's, I would implement a generic mechanism which uses a socketpair for signalling between the "foreign" and the trio threads, and a couple of methods on top that allow for running some code in the "other" thread.
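A rough sketch of that socketpair signalling idea (using trio.lowlevel.wait_readable; at the time of this thread the same primitive lived under trio.hazmat):

import socket
import trio

# One end lives with the "foreign" (e.g. Qt) thread, the other with Trio.
wakeup_recv, wakeup_send = socket.socketpair()
wakeup_recv.setblocking(False)

def wake_trio_from_foreign_thread():
    # Called from the Qt/Kivy/etc. thread; a single byte is enough to wake Trio.
    wakeup_send.send(b"\x00")

async def watch_for_foreign_wakeups():
    while True:
        await trio.lowlevel.wait_readable(wakeup_recv)
        wakeup_recv.recv(4096)  # drain the wakeup bytes
        # ...now run whatever work the foreign thread queued up...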

mwchase commented 6 years ago

On the topic of GUI integration in general, I just now tried to get Trio to work with wxPython. The resulting code is at the toy stage, but the big thing I learned is that wx.App segfaults if it's not run from the main thread, at least on Mac. So that's a data point for implementation efforts: can't put wx.App in a worker thread.

I haven't really exercised anything too significant past that; I'm currently putting trio in a thread that creates a portal and sends it to the main thread through a standard Queue, then waits in a loop for a stop event to trigger. Between wx and trio, I'm not sure I'm using either in a way that makes sense; this is what happens when you try to learn two libraries at once, I guess.

To be honest, I don't have much hands-on experience with GUI programming, so this might go kind of slow on my end, but I like tinkering with things. (I'm also not sure whether hooking into trio via callbacks gains me anything in the end, but, again, tinkering.)

nicoddemus commented 6 years ago

@mwchase thanks for sharing! 👍

AFAIK Qt has the same limitation: QApplication's event loop must also happen in the main thread.

tacaswell commented 6 years ago

Qt allows for re-entering the main loop, like by calling processEvents recursively inside processEvents. This is kind of a gross hack that IIRC is used in a few places where they decided doing things in a proper non-blocking way was just too annoying.

You also need to rely on this if you want responsive Qt apps running inside of synchronous code running inside of IPython ;)

njsmith commented 6 years ago

I just poked a bit more at what would be involved in letting Trio's existing IO manager code "run under" a foreign event loop.

Apparently it's totally fine to call epoll_ctl from one thread while another thread is blocked in epoll_wait. From a post on LKML:

Suppose thread B calls epoll_wait and blocks before thread D calls epoll_ctl. Is it safe to do so? Will thread B be notified for the event submitted by thread D?

Using the interfaces this way is pretty much their entire point. They'd be almost useless if you couldn't use them in this way.

I know that the same is also true for IOCP, and I suspect (but have not verified) that it is true for kqueue.

Let's pretend for a moment that it's true for all of our primitives. (The Windows loop is actually much more complex right now, but as part of this exercise we're going to imagine we've switched Windows to using IOCP alone and gotten rid of select.)

Then we could split handle_io up, into two operations: one that just does the low-level blocking syscall, and returns an opaque object (e.g., the raw return value from the blocking syscall), and another that takes this opaque value and issues calls to reschedule etc. When running under a foreign event loop, we'd spawn a thread which does something like:

while True:    
    opaque_object = handle_io_top_half(timeout=infinite)
    foreign_loop.call_soon_threadsafe(handle_io_bottom_half, opaque_object)

Simultaneously, we'd also run a scheduler loop, as a task under the foreign event loop:

# Imagine this is unrolled into a callback-based state machine running under Qt or whatever
while True:
    await wait_runnable_tasks_queue_is_not_empty()
    batch = runnable_tasks_queue.popall()
    for task in batch:
        run task for one tick

This would require tweaking reschedule a bit so it can wake up the scheduler if necessary, but that's easy (at least in principle).

Now: if you call reschedule from inside the main thread, that should work fine. So operations like queue.put_nowait should work fine, either from Qt context or Trio context.

What if you do something that involves I/O, like wait_readable? Well, it mutates the IO manager's internal state, but that's all within the main thread, so it's OK. It also calls epoll_ctl or similar to mutate the underlying OS's I/O watch state... but that's safe to do from any thread, according to our original assumption. And then when it reports back, that ends up calling reschedule in the main thread, which as noted should be fine.

Hmm... actually, as written above we have a small problem, because currently handle_io_bottom_half is responsible for resetting events that have happened (e.g. calling epoll_ctl to remove an fd from the watch set after the task that was waiting for it wakes up). So if we immediately call epoll_wait again, we'll report duplicate events. Two options: move the event resetting into handle_io_top_half, or else make sure that we don't schedule the next call to handle_io_top_half until the previous call to handle_io_bottom_half finishes. The latter is probably simpler all around.
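Here's a sketch of that latter option, reusing the hypothetical handle_io_top_half / handle_io_bottom_half split and foreign_loop from above, with a plain threading.Event to make sure the two halves alternate:

import threading

io_done = threading.Event()
io_done.set()  # nothing in flight yet

def io_thread_loop():
    while True:
        io_done.wait()   # don't poll again until the previous bottom half ran
        io_done.clear()
        opaque_object = handle_io_top_half(timeout=infinite)  # blocking syscall
        foreign_loop.call_soon_threadsafe(run_bottom_half, opaque_object)

def run_bottom_half(opaque_object):
    # Runs in the main thread: reschedules tasks and resets one-shot
    # event registrations, then lets the I/O thread poll again.
    handle_io_bottom_half(opaque_object)
    io_done.set()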

So... that's a nice story. What are the gaps between this and reality?

njsmith commented 5 years ago

Checking in 6 months later to note that the more I think about it, the more it feels like the strategy of using a thread to let trio's existing IO backends coexist with foreign event loops is the right way to go. I don't have answers to all of those questions yet, but one major development since then is that in the discussion on #52 we figured out how to make Trio's Windows backend be IOCP-only, which is a precondition for implementing the trick here.

So if anyone wants to move this forward, implementing the stuff in #52 is probably the first step.

nosklo commented 4 years ago

It seems #52 is now closed, what's the next step?

njsmith commented 4 years ago

I just posted a first-draft PR at implementing this: #1551

It has a demo of what it looks like, using asyncio as a sample, but it should work the same with Qt etc.

As it turns out, in the simplest implementation I ended up with, it doesn't actually matter whether the IO backend is threadsafe; it would have worked fine with select too. So I guess #52 wasn't a blocker after all, whoops :-).