sqrtM / nesemu

tiny nes emulator, for fun

working web workers within wasm #4

Open sqrtM opened 9 months ago

sqrtM commented 9 months ago

native works bc of thread::spawn. This needs to be genericised to also work for web workers. Investigate how to do this. To start, here's a chatGPT answer that might get you started on the right path:

--

Translating thread::spawn calls to web workers in Rust when targeting both wasm and binary can be a bit tricky but is definitely possible. Web workers allow you to perform concurrent tasks in a web application, similar to threads in native applications. Here's a general approach to translate thread::spawn calls to web workers:

  1. Create a Trait for Thread Abstraction (Optional): You can create a custom trait that abstracts over threads and web workers. This makes it easier to switch between the two based on the compilation target. For example:

    pub trait Worker {
        fn spawn<F>(&self, func: F) where F: FnOnce() + Send + 'static;
    }
  2. Implement the Trait for Native Threads:

    #[cfg(not(target_arch = "wasm32"))]
    pub struct NativeWorker;
    
    #[cfg(not(target_arch = "wasm32"))]
    impl Worker for NativeWorker {
        fn spawn<F>(&self, func: F)
        where
            F: FnOnce() + Send + 'static,
        {
            std::thread::spawn(func);
        }
    }
  3. Implement the Trait for Web Workers (wasm): For wasm, you'll need to use the web_sys crate to work with web workers. You can implement the Worker trait for web workers as follows:

    #[cfg(target_arch = "wasm32")]
    pub struct WebWorker;
    
    #[cfg(target_arch = "wasm32")]
    impl Worker for WebWorker {
        fn spawn<F>(&self, func: F)
        where
            F: FnOnce() + Send + 'static,
        {
        let func = Closure::once_into_js(func);
        // Worker::new takes the URL of the worker script
        let worker = web_sys::Worker::new("worker.js").unwrap();
        worker.post_message_with_transfer(&func, &JsValue::NULL).unwrap();
    }
    }

    Note that this example assumes you have the web_sys crate and wasm-bindgen set up for your wasm target.

  4. Usage: In your code, instead of directly calling thread::spawn, you would use the Worker trait like this:

    #[cfg(target_arch = "wasm32")]
    let worker = WebWorker;
    #[cfg(not(target_arch = "wasm32"))]
    let worker = NativeWorker;
    
    worker.spawn(|| {
        // Your threaded logic here
    });

By using this approach, you can abstract away the differences between native threads and web workers, allowing you to keep the same general logic for both compilation targets. Make sure to adjust the code according to your specific project structure and requirements.
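As the quoted answer notes, web_sys gates each browser API behind a cargo feature, so `Worker` and `MessageEvent` have to be enabled explicitly. A minimal manifest sketch (the version numbers are illustrative, not what nesemu actually pins):

```toml
[dependencies]
wasm-bindgen = "0.2"

[dependencies.web-sys]
version = "0.3"
# Each web API type used in the code must be listed here
features = ["Worker", "MessageEvent", "console"]
```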

sqrtM commented 9 months ago

Most recent commit is working on this. Taking a very different approach because, unsurprisingly, the chatGPT answer was basically a dead end.

I ended up going with an approach sort of like the following (generated code):

use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
use web_sys::Worker;

#[wasm_bindgen(start)]
pub fn start() -> Result<(), JsValue> {
    // Create a new worker
    let worker = Worker::new("worker.js")?;

    // This closure will be called when the worker sends a message
    let onmessage_callback = Closure::wrap(Box::new(move |event: web_sys::MessageEvent| {
        // Handle the message from the worker
        let data = event.data();
        let data_str = data.as_string().unwrap();
        web_sys::console::log_1(&format!("Received message from worker: {}", data_str).into());

        // Update the GUI...
    }) as Box<dyn FnMut(_)>);

    // Set the `onmessage` handler of the worker to our callback
    worker.set_onmessage(Some(onmessage_callback.as_ref().unchecked_ref()));

    // Don't drop the callback
    onmessage_callback.forget();

    Ok(())
}

// This is worker.js

// Send a message to the main thread every 16ms (60fps)
setInterval(() => {
    postMessage("update");
}, 16);

For now it actually works. The web worker basically acts as a sort of trampoline function, so it's not REALLY "multithreaded" in the traditional sense, and the performance is very, very bad at the moment, but the approach itself is holding up well.

Leaving the issue open for future performance improvements.