Open lukaslihotzki opened 2 years ago
Threading is difficult in WASM, but AudioWorkletNode can also be used to play samples generated in the main thread:
```rust
use std::f32::consts::PI;
use std::num::Wrapping;

use js_sys::Float32Array;
use wasm_bindgen::{closure::Closure, JsCast, JsValue};
use wasm_bindgen_futures::JsFuture;
use web_sys::{AudioContext, AudioWorkletNode, MessageEvent};

// web_sys features: ["AudioContext", "AudioDestinationNode", "AudioNode",
// "AudioWorklet", "AudioWorkletNode", "BaseAudioContext", "MessageEvent",
// "MessagePort"]

pub async fn init_sound() -> AudioContext {
    let context = AudioContext::new().unwrap();
    let worklet = context.audio_worklet().unwrap();
    // The AudioWorkletProcessor is inlined as a URL-encoded data: module. It
    // buffers samples received over its MessagePort and, after each process()
    // call, reports back how many samples it consumed.
    JsFuture::from(worklet.add_module("data:application/javascript,\
        registerProcessor('buffered-stream-processor'%2Cclass%20BufferedStreamProcessor%20extends%20AudioWorkletProcessor%7B%0A\
        constructor()%7B%0Asuper()%3B%0Athis.buf%3D%5B%5D%3B%0Athis.port.onmessage%3D(%7Bdata%7D)%3D%3Ethis.buf.push(...data)%3B%0A\
        %7D%0Aprocess(inputs%2Coutputs%2Cparameters)%7B%0Aconst%20output%3Doutputs%5B0%5D%3B%0Aconst%20len%3Doutput%5B0%5D.length%3B%0A\
        this.port.postMessage(len)%3B%0Aconst%20data%3Dthis.buf.splice(0%2Clen)%3B%0Aoutput.forEach(channel%3D%3Echannel.set(data))%3B%0A\
        return%20true%3B%0A%7D%0A%7D)%3B").unwrap()).await.unwrap();
    let node = AudioWorkletNode::new(&context, "buffered-stream-processor").unwrap();
    let port = node.port().unwrap();
    let mut initial = 512; // one-time prefill, to build up some slack in the buffer
    let mut t = Wrapping::<u8>(0);
    let closure = Closure::wrap(Box::new(move |ev: MessageEvent| {
        // The worklet reports how many samples it just consumed; replace them
        // (plus the one-time prefill) with freshly generated ones.
        let len = ev.data().as_f64().unwrap() as u32 + std::mem::replace(&mut initial, 0);
        let data = Float32Array::new(&JsValue::from_f64(len as f64));
        for i in 0..len {
            data.set_index(i, ((t.0 % 128) as f32 / 128. * 2. * PI).sin());
            t += 1;
        }
        port.post_message(&data).unwrap();
    }) as Box<dyn FnMut(_)>)
    .into_js_value();
    node.port()
        .unwrap()
        .set_onmessage(Some(closure.as_ref().unchecked_ref()));
    node.connect_with_audio_node(&context.destination())
        .unwrap();
    context
}
```
In my application, this approach seems to perform better than the current CPAL webaudio backend (less stuttering than with CPAL, although the buffer is smaller). Is there a test for measuring this objectively? Would you generally be interested in using this approach upstream in CPAL?
https://github.com/rustwasm/wasm-bindgen/pull/3017 contains an example that runs a WASM thread in an AudioWorkletProcessor. CPAL could use that approach too.
Any updates on this? I'm working on performance for an emulator, and the current web audio implementation is difficult to get working consistently without dropouts.
I've been experimenting on this topic a little. Got my (non-cpal) audio worklet running with input and output in Firefox, Chrome, and Safari.
I'm looking into implementing this in cpal, but I'm a little stuck. Some of the work is async, like adding the module, and I can't see where to do that without changing the cpal API: as I understand it, there's currently no way to block on a future in wasm in order to keep build_output_stream_raw synchronous. Any advice would be welcome.
Maybe add an async version alongside, and cfg out the synchronous version on wasm? That way applications which want to support web can deal with the complexity, and those that don't are unaffected.
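A rough shape of that suggestion, as a sketch only — the names (`Device`, `build_output_stream`) loosely mirror cpal's API but are illustrative, not its actual signatures:

```rust
// Sketch of a cfg-gated API split: keep the synchronous constructor on
// native targets only, and add an async variant available everywhere.
// Illustrative names, not cpal's real types.
struct Stream {
    via: &'static str, // which constructor produced this stream
}
struct Device;

impl Device {
    /// Synchronous constructor: compiled out on wasm32, so existing
    /// native users are unaffected.
    #[cfg(not(target_arch = "wasm32"))]
    fn build_output_stream(&self) -> Stream {
        Stream { via: "sync" }
    }

    /// Async constructor available on all targets; on wasm32 this is
    /// where the worklet module load could be awaited.
    #[allow(dead_code)]
    async fn build_output_stream_async(&self) -> Stream {
        Stream { via: "async" }
    }
}
```

Applications that want to support the web would call the async variant and accept the extra complexity; everyone else keeps the sync path.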
Thanks, @Ralith
I made a naive version with atomics, which doesn't sound quite right yet.
Curious what you or someone else would say about this draft https://github.com/RustAudio/cpal/pull/826
Regarding the async part, I concluded to do that in JS code: it looks synchronous to Rust, but the JS side keeps a queue of things to call once the module is loaded.
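That "queue until loaded" pattern can be sketched abstractly like this (an illustration of the idea in Rust, not the actual JS shim):

```rust
// Pattern sketch: calls made before the worklet module finishes loading
// are buffered, then flushed in order once loading completes, so the API
// exposed to the caller stays synchronous. Illustrative only.
struct Deferred {
    ready: bool,
    queue: Vec<Box<dyn FnOnce()>>,
}

impl Deferred {
    fn new() -> Self {
        Deferred { ready: false, queue: Vec::new() }
    }

    /// Run `f` now if the module has loaded, otherwise queue it.
    fn call(&mut self, f: impl FnOnce() + 'static) {
        if self.ready {
            f();
        } else {
            self.queue.push(Box::new(f));
        }
    }

    /// Mark the module loaded and flush the queue in order.
    fn set_ready(&mut self) {
        self.ready = true;
        for f in self.queue.drain(..) {
            f();
        }
    }
}
```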
Sadly, I've concluded I'm giving up on this topic. I ended up here: https://github.com/RustAudio/cpal/pull/826, if anyone finds it useful. Perhaps I could learn some buffer patterns, or use the ringbuf crate's traits with custom storage built on js_sys atomics.
But that would still send frames to play/record between the main thread and the worklet via that shared memory. It only becomes interesting if the callback itself runs on the worklet. Sending the callback there isn't straightforward, though, and may even be prevented by the browser. It appears that sending a wasm module is indeed the best approach, but that requires the module to be precompiled, with bindings generated to call it from JS code; this is necessary because it is not possible to extend JS classes from Rust. At that point it may be easier for the user to package their DSP code separately and load just that in the audio worklet, which would in turn require establishing some channel of communication with the main thread for any interactive audio.
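For the shared-memory framing specifically, the usual structure is a single-producer/single-consumer ring buffer with monotonically increasing read/write counters. A minimal sketch (illustrative, not cpal's or ringbuf's actual code — on the web, the storage and counters would live in the SharedArrayBuffer, and each side would hold a shared reference rather than `&mut self`):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Minimal SPSC ring buffer sketch. The main thread would be the producer
/// and the AudioWorklet the consumer; here it is plain memory so the logic
/// can be shown and tested natively.
struct SpscRing {
    buf: Vec<f32>,
    head: AtomicUsize, // total samples read so far
    tail: AtomicUsize, // total samples written so far
}

impl SpscRing {
    fn new(capacity: usize) -> Self {
        SpscRing {
            buf: vec![0.0; capacity],
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    /// Producer side: write as many samples as fit, return how many.
    fn push(&mut self, samples: &[f32]) -> usize {
        let cap = self.buf.len();
        let head = self.head.load(Ordering::Acquire);
        let tail = self.tail.load(Ordering::Relaxed);
        let free = cap - tail.wrapping_sub(head);
        let n = samples.len().min(free);
        for (i, &s) in samples[..n].iter().enumerate() {
            self.buf[tail.wrapping_add(i) % cap] = s;
        }
        self.tail.store(tail.wrapping_add(n), Ordering::Release);
        n
    }

    /// Consumer side: read up to `out.len()` samples, return how many.
    fn pop(&mut self, out: &mut [f32]) -> usize {
        let cap = self.buf.len();
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        let n = out.len().min(tail.wrapping_sub(head));
        for (i, o) in out[..n].iter_mut().enumerate() {
            *o = self.buf[head.wrapping_add(i) % cap];
        }
        self.head.store(head.wrapping_add(n), Ordering::Release);
        n
    }
}
```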
Currently, the webaudio backend internally uses AudioBuffers chained together via the AudioBufferSourceNodes' onended events. These buffers need to be refilled regularly by the backend.
Web Audio has another type of node that closely resembles CPAL's API (translated to JavaScript): AudioWorkletNodes regularly call the `process` callback in a separate thread to generate new samples. Maybe CPAL could implement an AudioWorkletProcessor that receives a WASM module and a SharedArrayBuffer and uses them to create a new WASM thread that is invoked on each `process` callback. This solution would provide more consistent behavior across platforms and could perhaps reduce overhead and latency.
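That callback model maps naturally onto cpal's data callback: the audio thread requests a fixed render quantum (128 frames in Web Audio) and a user callback fills it. A toy illustration of the shape (the 128-frame quantum is the only Web Audio specific here; the driver function is hypothetical):

```rust
/// Web Audio's AudioWorkletProcessor invokes process() once per render
/// quantum of 128 frames. This toy driver shows the same shape: a user
/// callback is called repeatedly to fill fixed-size buffers, which is
/// essentially cpal's data-callback model already.
const QUANTUM: usize = 128;

fn run_quanta(mut data_callback: impl FnMut(&mut [f32]), quanta: usize) -> Vec<f32> {
    let mut rendered = Vec::with_capacity(quanta * QUANTUM);
    let mut buf = [0.0f32; QUANTUM];
    for _ in 0..quanta {
        data_callback(&mut buf); // the "process" step
        rendered.extend_from_slice(&buf);
    }
    rendered
}
```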