rust-windowing / winit

Window handling library in pure Rust
https://docs.rs/winit/
Apache License 2.0

Integration with async ecosystem #1199

Open dylanede opened 5 years ago

dylanede commented 5 years ago

At the moment, I believe it is unclear how to best integrate futures and asynchronous execution in an application with a winit event loop. Ideally, I would imagine that futures/streams could be created for events (e.g. a stream for window resize events), and some special executor would run on the current thread that internally calls EventLoop::run. I am not sure however how to integrate this with, for example, tokio. Perhaps this is all something that could be built in a separate crate on top of winit, though I think this is the best place to discuss it.

The main immediate benefit I see of future-izing the API is that it allows state machines involving user input to be written much more naturally using async functions.
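
As a rough illustration of what I have in mind, here is a sketch assuming a hypothetical resize_events() stream that winit does not actually provide (it could come from a wrapper crate):

use futures::stream::{Stream, StreamExt};
use winit::dpi::PhysicalSize;

// Hypothetical: some wrapper hands out a stream of window resize events.
async fn track_resizes(mut resizes: impl Stream<Item = PhysicalSize<u32>> + Unpin) {
    // A state machine over user input becomes an ordinary loop with awaits.
    while let Some(size) = resizes.next().await {
        println!("window resized to {}x{}", size.width, size.height);
    }
}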

goddessfreya commented 5 years ago

I think @Osspial has been working on something async-related.

Osspial commented 5 years ago

I don't think Winit async/await support should live in the main repository, since it can be cleanly implemented on top of Winit's existing event loop API, it isn't necessary in all cases, and there's much more room for creating an opinionated API. However, I'd be alright with rust-windowing endorsing an official async wrapper for Winit.

Also, I have worked on an async wrapper, although it doesn't work on master quite yet. You can check it out here. https://github.com/osspial/winit-async

ryanisaacg commented 5 years ago

One fairly significant papercut I've run into when working on my own async event loop wrapper is that it's very difficult to manage the lifetimes required to create a window during the execution of the Future.

alvinhochun commented 4 years ago

This is only a rough idea as I am not too familiar with the async ecosystem: The web backend currently uses an exception as a hack to never return from EventLoop::run, but I've read that wasm-bindgen supports Future, which means it might be possible to make an EventLoop::async_run which doesn't use an exception, but instead returns a Future that completes when the loop has ended. It might also be possible to make an async-based polling API.

I'm not sure how this plays with the existing event loop model and on other platforms.
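
Just to sketch the shape of the idea (nothing like this exists in winit; it is written as a free function and left unimplemented):

use winit::event::Event;
use winit::event_loop::{ControlFlow, EventLoop, EventLoopWindowTarget};

// Hypothetical: a run() variant that resolves instead of diverging or throwing.
// On the web it could be backed by wasm-bindgen's Future support.
async fn async_run<T: 'static>(
    _event_loop: EventLoop<T>,
    _event_handler: impl FnMut(Event<'_, T>, &EventLoopWindowTarget<T>, &mut ControlFlow) + 'static,
) {
    unimplemented!("sketch only: drive the platform loop until it ends")
}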

FredrikNoren commented 3 years ago

Here's a small example of how to get winit+wgpu+tokio working: https://gist.github.com/FredrikNoren/7c3535b11e99e8fcd8dd3d55f9a934a2

exellian commented 2 years ago

I noticed something that is impossible to do with the current design of the event loop. Let's assume you want a single-threaded async runtime that runs on the same thread as the event loop (low overhead), and now you want to create a new window in this async context. Because we as developers don't have control over the loop behind event_loop, we would have to do something like this:

let runtime = tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build()
    .unwrap();

event_loop.run(move |event, target, cf| {
    runtime.block_on(async {
        let l = target; // Lifetime error: can't borrow target
        // build window with target
    });
});

The problem is that we cannot borrow the EventLoopTarget because it doesn't satisfy the 'static lifetime requirement of tokio's block_on method. So in a single-threaded context this problem is not solvable, because we can't communicate with the outer loop and get a response. If we used a multi-threaded tokio runtime we could solve the problem with channels (see e.g. mpsc::channel), but that doesn't work on a single thread. In addition, this example doesn't execute the async blocks concurrently on a single thread but only sequentially, which is not ideal. Therefore it is necessary to be able to control the loop of the EventLoop. With the old approach of polling and waiting for events manually (see design change https://github.com/rust-windowing/winit/issues/459) this problem would be solvable, but with the current design the best we can do to integrate async is to use at least two threads, which is limiting.

ps: I could also think of an EventLoop::async_run as proposed above, which is itself an async call and takes an async closure.
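
For completeness, here is a minimal sketch of the multi-threaded workaround mentioned above: async work runs on a separate tokio runtime (assuming tokio's default multi-thread runtime) and window creation is requested through a made-up UserEvent, so the window is still built on the event-loop thread.

use winit::event::Event;
use winit::event_loop::{ControlFlow, EventLoopBuilder};
use winit::window::WindowBuilder;

enum UserEvent {
    CreateWindow,
}

fn main() {
    let event_loop = EventLoopBuilder::<UserEvent>::with_user_event().build();
    let proxy = event_loop.create_proxy();

    // Async work runs off the UI thread; window creation is only *requested* here.
    std::thread::spawn(move || {
        let rt = tokio::runtime::Runtime::new().unwrap();
        rt.block_on(async move {
            // ... do async work, then ask the event-loop thread for a window ...
            let _ = proxy.send_event(UserEvent::CreateWindow);
        });
    });

    // Keep any created window alive across handler calls.
    let mut window = None;
    event_loop.run(move |event, target, control_flow| {
        *control_flow = ControlFlow::Wait;
        if let Event::UserEvent(UserEvent::CreateWindow) = event {
            // `target` is only borrowed for the duration of this handler call, which is all we need.
            window = Some(WindowBuilder::new().build(target).unwrap());
        }
    });
}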

maroider commented 2 years ago

Your example is trivially fixed by adding a move, as is done in most of winit's examples.

- event_loop.run(|event, target, cf| {
+ event_loop.run(move |event, target, cf| {

I also can't see a 'static bound on Runtime::block_on's signature: pub fn block_on<F: Future>(&self, future: F) -> F::Output.

exellian commented 2 years ago

Your example is trivially fixed by adding a move, as is done in most of winit's examples.

- event_loop.run(|event, target, cf| {
+ event_loop.run(move |event, target, cf| {

I also can't see a 'static bound on Runtime::block_on's signature: pub fn block_on<F: Future>(&self, future: F) -> F::Output.

Ok, thanks, you are right: Runtime::block_on actually doesn't require a 'static lifetime, so the borrowing problem is moot. But the problem remains that Runtime::block_on blocks the thread, and therefore no events will be registered in the meantime. So I don't see how a truly async single-threaded event_loop is possible with the current design. Only if you use more than one thread and channels?

ps: I could think of polling the futures manually in the event loop, but this leads again to the problem of how a future would have a reference to the EventLoopTarget:

// No way that this future has a reference to the EventLoopTarget
let test = async {
    //create window here
};
event_loop.run(|event, target, cf| {
    // Own scheduler that polls futures
    test.poll(...)
});

Liamolucko commented 2 years ago

I could think of polling the futures manually in the event loop, but this leads again to the problem of how a future would have a reference to the EventLoopTarget:

// No way that this future has a reference to the EventLoopTarget
let test = async {
    //create window here
};
event_loop.run(|event, target, cf| {
    // Own scheduler that polls futures
    test.poll(...)
});

This would ideally be possible by passing an async function, and then calling it with target on the first call of the event handler; however, that doesn't work, because target only lives for as long as that call of the event loop handler.

I thought I could solve it by changing EventLoop::run to look like this:

pub fn run<F>(self, event_handler: F) -> !
where
    F: 'static + FnMut(Event<'_, T>, &'static EventLoopWindowTarget<T>, &mut ControlFlow);

That's not valid, though, because of the possibility of the event handler panicking, and the panic being caught by catch_unwind; in that case, the program could continue after the EventLoop was dropped, invalidating the 'static lifetime.

Even if EventLoop::run were to catch panics and abort instead of unwinding, it still wouldn't work on the web; the event loop doesn't close the web page when it's destroyed, and user code can keep running through event listeners and callbacks and such. That could be changed, but feels quite hacky (albeit not much worse than throwing an exception from EventLoop::run).

rib commented 2 years ago

just thinking out loud...

I would imagine that it could help if winit event loop backends were based on the mio reactor, including exposing a standard API for registering event sources (e.g. based on file descriptors) that can trigger custom event loop wakeups.

I think mio is the API that tokio uses for blocking on IO (as a wrapper around epoll/kqueue) and I could imagine there'd be some way to join the dots by getting a tokio runtime to defer to the mio reactor of the winit event loop whenever the runtime is idle waiting for new input before polling.

This would act like a single threaded tokio runtime. (I think it'd make sense in this context to only support running tasks in the main loop thread, since practically speaking there are lots of platform-specific quirks that mean large swathes of the winit API are only usable from the UI thread). Apps could always spawn their own multi-threaded runtimes in a thread they create if they have other tasks that don't interact with the UI.

That's just based on my understanding of how a typical unix main loop works, e.g. things like libuv and glib's mainloop, which tend to provide an extensible API for adding sources, which winit doesn't currently seem to have an equivalent for, even though the event loops still boil down to the exact same kind of block on poll() design.

From quickly poking at tokio I don't see an obvious escape hatch for accessing the reactor state from the public API though so maybe it's not really possible. Maybe the opposite would be possible - to add all winit event sources to tokio's reactor. Certainly the X and Android backends are based on thin epoll wrappers and it'd perhaps be feasible to move all the file descriptors over to tokio. The Wayland backend is built on a slightly more elaborate polling abstraction but it should also be easy to pull the file descriptors out of that.
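
As a rough sketch of that direction: tokio's AsyncFd (behind the net feature) can register an arbitrary file descriptor with its reactor, so if the backend's connection fd were exposed, something like this could await readiness on it. The display_fd below is hypothetical; winit doesn't currently hand that out.

use std::io;
use std::os::unix::io::RawFd;
use tokio::io::unix::AsyncFd;
use tokio::io::Interest;

// Wait on a backend's connection fd via tokio's reactor (epoll/kqueue underneath).
async fn drive_display(display_fd: RawFd) -> io::Result<()> {
    let afd = AsyncFd::with_interest(display_fd, Interest::READABLE)?;
    loop {
        let mut guard = afd.readable().await?;
        // ... read and dispatch all pending backend events here ...
        guard.clear_ready(); // re-arm readiness before waiting again
    }
}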

For iOS/mac it looks like they have a CFRunLoop abstraction over kqueue with APIs for accessing the main/current thread loop, where it might be possible to create a run loop source based on the file descriptors of an mio reactor (hard to imagine that being possible without a funky custom branch of mio).

Otherwise, maybe there'd be some trick for getting a tokio runtime to poll in one thread but execute tasks in another: a kind of side-car thread would block on tokio IO events and, whenever it wakes up, would somehow wake the event loop thread and then actually execute the tokio runtime tasks on that thread. I guess that kind of model would be required on Windows too.

Every mainloop library I've looked at has always had to do weird stuff for Windows, so I guess that would be a pita to handle too :)

DemiMarie commented 2 years ago

I was not aware that Android used epoll! I was worried it was based on binder or similar. The web backend can (and must) just use the web’s own async APIs, so that leaves Windows as having to use a helper thread.

kchibisov commented 2 years ago

mio doesn't have level triggering, and it's something that should be used for Wayland at least. You could have multiple clients reading and so on.

I'm not sure there's a proper way to implement everything with edge triggering.

If anything, adding support for Windows to calloop is more feasible, given that it has level triggering and you can write custom event sources for it... though right now it's more fd-specific.

rib commented 2 years ago

I was not aware that Android used epoll! I was worried it was based on binder or similar. The web backend can (and must) just use the web’s own async APIs, so that leaves Windows as having to use a helper thread.

Ah yeah, oops, I forgot about web - that'll be a fun one to consider. The web is fundamentally event-loop based at the browser level though, so hopefully an elegant solution is possible, but I have no idea atm how tokio handles web support.

Yeah, Android, being Linux based, uses epoll, but the NDK provides a wrapper called a 'looper' that includes some higher-level functionality for adding sources with callbacks. In layers like ndk-glue, and in the recent glue layers I've been experimenting with, we currently use this looper abstraction. I've not tried it, but I think we could probably forgo that abstraction for our needs, because we are in full control of creating the looper and know exactly what file descriptors we're adding to it. Since we don't actually use any of the callback functionality, we can probably just use epoll directly.

The only concern would be if your Android application used libraries/crates that were ported to Android on the assumption that they can query the thread's looper and add their own file descriptors. This kind of thing isn't really established for Rust development though, and I think it would be fair to say that in this scenario, if you want to integrate with the winit event loop, you should be adding custom sources via a winit API and not just punching through to a platform-specific API like ALooper_forThread.

rib commented 2 years ago

mio doesn't have level triggering, and it's something that should be used for Wayland at least. You could have multiple clients reading and so on.

I'm not sure there's a proper way to implement everything with edge triggering.

Not exactly sure off the top of my head, but I wouldn't have expected level triggering to be necessary. When we were bootstrapping Wayland support in the GNOME desktop, that was all done on top of the GLib mainloop, which doesn't expose level triggering, since I'm not sure that's supported by other OSs.

Would be curious to see where level triggering is depended on currently.

rib commented 2 years ago

oh, wait, mio doesn't have level triggering WAT? Sorry I didn't really take in what you said, and assumed the opposite :) That's surprising. I would have guessed edge triggering was less portable but suppose not.

kchibisov commented 2 years ago

@rib yeah, I'm not sure I've seen anyone doing edge triggering on Wayland to poll anything. I think libwayland is level-triggered as well.

DemiMarie commented 2 years ago

oh, wait, mio doesn't have level triggering WAT? Sorry I didn't really take in what you said, and assumed the opposite :) That's surprising. I would have guessed edge triggering was less portable but suppose not.

kqueue is edge triggered only I believe

kchibisov commented 2 years ago

kqueue is edge triggered only I believe

It's not. https://github.com/Smithay/calloop/blob/0d3b13a34bf351858b4cf745a9f510c80f9ecd90/src/sys/kqueue.rs#L16.

rib commented 2 years ago

@rib yeah, I'm not sure I've seen anyone doing edge triggering on Wayland to poll anything. I think libwayland is level-triggered as well.

libwayland itself doesn't do the polling, so it should be more a question of whether it's possible to guarantee that it exhaustively reads all pending data after an edge POLLIN event (they do have a simple event-loop abstraction, but it's just for servers). wl_display_dispatch() does also call poll() while flushing output, and effectively double-checks the POLLIN status before reading, which will check the level state - maybe this is what you're referring to - but that shouldn't really have any bearing on how the event loop itself works.

It's been years since I last worked with libwayland closely, so I was poking through it yesterday. I'd forgotten about the somewhat complex multi-threaded queuing system :/

Overall though it looks like all the queuing logic is handled on top of a wl_connection abstraction that reads as much as it can into a circular buffer each time wl_display_read_events() is called (typically via wl_display_dispatch()). At the wl_connection level, if the circular buffer ever runs out of space then it returns an EOVERFLOW error, and at the wl_display level that gets treated as a fatal display error (so that's not something that needs to be handled in general). So then, apart from that fatal error situation, it looks like libwayland is already reading as much as possible in wl_connection_read() - so at least at first glance my current impression is that it might actually be compatible with edge triggering as is?
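
For reference, that "reads as much as possible" property is exactly what edge triggering needs: after a readiness wakeup you keep reading until the fd would block. A generic sketch of that rule (not libwayland's actual code):

use std::io::{ErrorKind, Read};

// Edge-triggered drain loop: read from a non-blocking source until it would block.
fn drain<R: Read>(source: &mut R, buf: &mut Vec<u8>) -> std::io::Result<()> {
    let mut chunk = [0u8; 4096];
    loop {
        match source.read(&mut chunk) {
            Ok(0) => return Ok(()), // EOF
            Ok(n) => buf.extend_from_slice(&chunk[..n]),
            Err(e) if e.kind() == ErrorKind::WouldBlock => return Ok(()),
            Err(e) => return Err(e),
        }
    }
}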

kchibisov commented 2 years ago

I think the issue is when you try to do something like dispatch_pending from multiple threads, or something under the hood is reading the connection in a blocking way, and in that particular case winit won't wake up?

e.g. what mesa is doing for something like vsync, so it wakes up winit. I'm just afraid that we won't wake up anymore when mesa is trying to read its events.

rib commented 2 years ago

I don't think the egl blocking should cause a problem, in part because that should happen on the main thread anyway - the main thing that's funky with mesa/egl is that it depends on the queue mechanism to ensure it doesn't lose events that the application cares about.

When egl blocks to synchronize with the compositor, it will leave all the other things it doesn't care about queued up, and then by the time you get back to winit, wl_display_dispatch_pending needs to be called before blocking to poll, to actually handle whatever might have got queued as a side effect, so there shouldn't be any outstanding work/events before blocking again. (Mesa calls wl_display_dispatch_queue() for a private queue in a loop to handle its blocking, which is comparable to wl_display_dispatch(), which just calls wl_display_dispatch_queue() for the main queue.)

There's a ref-counting protocol that allows reads to happen across multiple threads in case there are multiple consumers for extensions (similar to the egl situation) where each thread first has to register an interest in reading (wl_display_prepare_read_queue() increments a ref-count) before actually reading (wl_display_read_events() decrements the refcount and the winner that actually decrements to zero will be responsible for reading). That protocol ensures that only one thread will end up being responsible for the read and the other threads will wait on a pthread condition. No matter which thread ends up doing the read though the end result should be the same in that wl_connection_read() should read as much as possible (after which we shouldn't be reliant on a level trigger to ensure we wake up again). Whichever thread does the read will then also split everything that's read into its appropriate queue (this is how egl/mesa gets what it needs) and will issue a wake up for all the other threads that are currently blocked waiting on the pthread condition variable. From that point on it shouldn't really matter who handled the IO read, since it's now just a question of draining the events that have been queued up.

LukasBombach commented 2 years ago

From what I can understand of the macOS implementation, winit is already receiving each event from the OS (this is how I read the maybe_dispatch_device_event function) and uses that to implement the sync event loop API. Would it be feasible to provide an abstraction from that to users which allows integration with an async library of their choice? Be it mio, tokio, or whatever else will be there in the future?

notgull commented 1 year ago

I was thinking that an ideal solution to this problem would probably look like having this function on EventLoop:

impl<T> EventLoop<T> {
    pub fn block_on(
        self,
        event_handler: impl Fn(/* event handler arguments */),
        future: impl Future<Output = i32>,
    )
}

When block_on is called, it runs the event loop, but with a twist. Instead of parking (e.g. calling GetMessage()), it polls the future to advance it. If the future is Pending and the event queue is empty, the event loop then parks and waits either for new events to be delivered or for the future to wake it up.

This would work out of the box with smol and async-std. With tokio, you would still need to enter the runtime. You may also need another thread to be blocking in order for the reactor to work; I'm not sure of the details.
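
For what it's worth, the waker side of such a block_on could look roughly like this. This is a sketch only: it wakes the loop via an EventLoopProxy user event, and keeps the proxy behind a Mutex since it isn't guaranteed to be Sync on every platform.

use std::sync::{Arc, Mutex};
use std::task::{Wake, Waker};
use winit::event_loop::EventLoopProxy;

// Waking the future just means "deliver a user event", which un-parks the event
// loop so it can poll the future again before going back to waiting.
struct LoopWaker(Mutex<EventLoopProxy<()>>);

impl Wake for LoopWaker {
    fn wake(self: Arc<Self>) {
        let _ = self.0.lock().unwrap().send_event(());
    }
}

fn loop_waker(proxy: EventLoopProxy<()>) -> Waker {
    Waker::from(Arc::new(LoopWaker(Mutex::new(proxy))))
}

With tokio, the future would additionally need to be polled inside the runtime context (Runtime::enter()) so that its timers and IO still work.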

As for the backend, there are many ways to simultaneously wait on events and a waker.

DemiMarie commented 1 year ago

@notgull: That will not work on web, where blocking operations of any form are disallowed. The best one can do is return a Future.

notgull commented 1 year ago

I've created the winit-block-on crate. While I still think that having async capability in winit is the best option, this would be the "next best thing".

dhardy commented 1 year ago

FYI I naively implemented async-over-winit here:

This approach probably doesn't scale so well to large numbers of futures, it would be nicer if winit directly provided a Waker (or at least a Sync proxy), and it requires futures to be 'static. Otherwise it seems to work (only tested on X11 and Wayland).

Rodrigodd commented 1 year ago

I faced this issue recently, so I implemented this simple async executor that uses the winit event loop as a driver. I don't have much experience with async, so I don't know if this approach has any major flaws.

From a library perspective, this requires some amount of coupling with the application, because the user needs to create a new user event and handle it in the event loop. Oh, and you need a way to let the user pass this user event to the executor in a generic way; I don't implement that here.

use std::{
    collections::HashMap, future::Future, pin::Pin, sync::Mutex,
    task::{Context, Poll}, time::Duration,
};

use winit::{
    event::*,
    event_loop::{ControlFlow, EventLoopBuilder, EventLoopProxy},
    window::WindowBuilder,
};

type TaskId = usize;

enum UserEvent {
    PollTask(TaskId),
    StartTask,
    Exit,
}

struct WinitExecutor {
    // a better implementation should use a vec of options here.
    tasks: HashMap<TaskId, Pin<Box<dyn Future<Output = ()>>>>,
    event_loop_proxy: EventLoopProxy<UserEvent>,
}
impl WinitExecutor {
    /// Create a new `WinitExecutor`, driven by the given event loop.
    pub fn new(event_loop_proxy: EventLoopProxy<UserEvent>) -> Self {
        Self {
            tasks: HashMap::new(),
            event_loop_proxy,
        }
    }

    fn next_task_id(&self) -> TaskId {
        static NEXT_TASK_ID: std::sync::atomic::AtomicUsize =
            std::sync::atomic::AtomicUsize::new(0);
        NEXT_TASK_ID.fetch_add(1, std::sync::atomic::Ordering::Relaxed)
    }

    /// Spawn a task.
    ///
    /// This immediately polls the task once, and then schedules it to be
    /// polled again if needed, using a `UserEvent::PollTask` event.
    pub fn spawn(&mut self, task: impl Future<Output = ()> + 'static) {
        let task = Box::pin(task);
        let task_id = self.next_task_id();
        self.tasks.insert(task_id, task);
        self.poll(task_id);
    }

    /// Poll a task.
    ///
    /// Should be called when the event loop receives a `UserEvent::PollTask`.
    pub fn poll(&mut self, task_id: TaskId) {
        // this waker only needs to work once, I believe, so we could use some type of "oneshot box"
        // instead of a mutex?
        let winit_proxy = Mutex::new(self.event_loop_proxy.clone());
        let waker = waker_fn::waker_fn(move || {
            let _ = winit_proxy
                .lock()
                .unwrap()
                .send_event(UserEvent::PollTask(task_id));
        });
        let task = self.tasks.get_mut(&task_id).unwrap().as_mut();
        match task.poll(&mut Context::from_waker(&waker)) {
            Poll::Ready(()) => {
                self.tasks.remove(&task_id);
            }
            Poll::Pending => {}
        }
    }
}

fn main() {
    println!("Hello, world!");
    let event_loop = EventLoopBuilder::<UserEvent>::with_user_event().build();
    let _window = WindowBuilder::new()
        .with_inner_size(winit::dpi::PhysicalSize::new(600, 480))
        .build(&event_loop)
        .unwrap();

    let mut tasks = WinitExecutor::new(event_loop.create_proxy());
    let event_loop_proxy = event_loop.create_proxy();

    let _ = event_loop_proxy.send_event(UserEvent::StartTask);

    event_loop.run(move |event, _, control_flow| match event {
        Event::UserEvent(UserEvent::StartTask) => {
            println!("starting task!");
            let event_loop_proxy = event_loop_proxy.clone();
            let task = async move {
                Wait(Duration::from_secs(1)).await;
                println!("waited one second!");
                Wait(Duration::from_secs(2)).await;
                println!("waited two seconds!");
                Wait(Duration::from_secs(3)).await;
                println!("exiting!");
                event_loop_proxy
                    .send_event(UserEvent::Exit)
                    .unwrap_or_else(|_| panic!("failed to send event"));
            };
            tasks.spawn(task);
        }
        Event::UserEvent(UserEvent::Exit) => *control_flow = ControlFlow::Exit,
        Event::UserEvent(UserEvent::PollTask(task_id)) => tasks.poll(task_id),
        _ => {}
    });
}

struct Wait(Duration);
impl Future for Wait {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == Duration::from_secs(0) {
            return Poll::Ready(());
        }
        let waker = cx.waker().clone();
        let duration = self.0;
        self.0 = Duration::from_secs(0);
        std::thread::spawn(move || {
            std::thread::sleep(duration);
            waker.wake();
        });
        Poll::Pending
    }
}

notgull commented 1 year ago

Consider checking this out: https://docs.rs/async-winit/latest/async_winit/

storycraft commented 1 year ago

I have made an experimental async executor on top of the winit event loop to find a solution for this issue. Like the executor above, it supports spawning multiple concurrent tasks, async timers, and immediate event handling.

https://github.com/storycraft/wm

The async event system is specially designed for winit's current event system and is also cheaper than channels in most use cases. Events are not buffered, and listeners are dispatched immediately in the target event phase, so the user can react correctly to the event phase.

Example program using wm


use instant::Duration;
use winit::event::WindowEvent;
use wm::timer::wait;

fn main() {
    wm::run(async_main());
}

async fn async_main() {
    // Wait for next resume event and create window
    let window = wm::resumed()
        .once(|_| Some(wm::create_window().unwrap()))
        .await;

    // spawn draw task
    wm::spawn_ui_task(async move {
        let _window = window;
        loop {
            wm::redraw_requested()
                .once(|_| {
                    println!("redrawing window");
                    Some(())
                })
                .await;

            println!("redraw done");
        }
    })
    .detach();

    // Spawn long task
    let task = wm::spawn_ui_task(async move {
        // Sleep for 5 secs
        wait(Duration::from_secs(5)).await;

        println!("task done");

        1 + 1
    });

    // Wait for close event
    wm::window()
        .once(|(_, event)| {
            if let WindowEvent::CloseRequested = event {
                Some(())
            } else {
                None
            }
        })
        .await;

    // Wait for long task to finish, show result and exit
    println!("task result: {}", task.await);
}