w3c / machine-learning-workshop

Site of W3C Workshop on Web & Machine Learning
https://www.w3.org/2020/06/machine-learning-workshop/

Noise suppression with DSP+DNN, WebNN and Web Audio API feature gaps #100

Open anssiko opened 3 years ago

anssiko commented 3 years ago

The RNNoise, Neural Speech Enhancement, and the Browser talk by @jmvalin -- which btw. has superb audio quality in its recording :) -- explains that the computational complexity of RNNoise (for a 48 kHz mono input signal) is around 40 MFLOPS, with the following top 3 contributors:

- DNN (matrix-vector multiply): 17.5 MFLOPS
- FFT/IFFT: 7.5 MFLOPS
- Pitch search (convolution): 10 MFLOPS

@jmvalin concludes:

> So, if we wanna optimize RNNoise, then these are the things we need to look at.
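As a rough sanity check on those figures, here is a back-of-the-envelope sketch of the per-frame budget (the 10 ms frame size is RNNoise's own, not stated in this thread, so treat it as an assumption):

```js
// The three hotspots above account for most of the ~40 MFLOPS budget.
const mflops = 17.5 + 7.5 + 10;              // ≈ 35 of the ~40 MFLOPS total
// RNNoise processes 10 ms frames, i.e. 100 frames/s at 48 kHz (assumption).
const framesPerSecond = 48000 / 480;         // 100
const flopsPerFrame = (mflops * 1e6) / framesPerSecond;
console.log(flopsPerFrame);                  // ≈ 350,000 FLOPs per frame
```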

The WebNN API recently added the Gated Recurrent Unit (GRU) and corresponding operators (https://github.com/webmachinelearning/webnn/pull/83) to fill the operator gaps and enable hardware acceleration of models that make use of GRUs, such as RNNoise.
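For illustration, a minimal sketch of what an RNNoise-style GRU layer could look like expressed through WebNN. The shapes follow the spec's gru() definition, but the layer sizes and variable names are assumptions, and the builder API surface has evolved since the linked PR:

```js
// Minimal WebNN graph with one GRU layer (illustrative sizes loosely
// based on RNNoise: 42 input features, 24 hidden units, 1 step/frame).
const context = await navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

const steps = 1, batchSize = 1, inputSize = 42, hiddenSize = 24;
const input = builder.input('features',
    { dataType: 'float32', shape: [steps, batchSize, inputSize] });

// In a real model these constants would hold trained weights; zeros here.
const weight = builder.constant(
    { dataType: 'float32', shape: [1, 3 * hiddenSize, inputSize] },
    new Float32Array(3 * hiddenSize * inputSize));
const recurrentWeight = builder.constant(
    { dataType: 'float32', shape: [1, 3 * hiddenSize, hiddenSize] },
    new Float32Array(3 * hiddenSize * hiddenSize));

// gru() returns a sequence of operands; [0] is the final hidden state.
const outputs = builder.gru(input, weight, recurrentWeight, steps, hiddenSize);
const graph = await builder.build({ hidden: outputs[0] });
```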

In earlier related discussions @jmvalin noted:

> Honestly what I'd like to see at some point is a WebBLAS (plus FFT and convolution/correlation). That would probably cover most use cases -- including a big chunk of WebML.

The WebNN API also recently added the general matrix multiplication (GEMM) operation of the Basic Linear Algebra Subprograms (BLAS), specifically from its Level 3.
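As a sketch (names and shapes here are illustrative), the Level-3 BLAS primitive alpha * A * B + beta * C maps onto WebNN's gemm() roughly like this:

```js
// Dense layer y = W * x expressed as a WebNN GEMM
// (alpha * A * B + beta * C, with the optional C omitted).
const context = await navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

const W = builder.constant(
    { dataType: 'float32', shape: [24, 42] },   // trained weights in practice
    new Float32Array(24 * 42));
const x = builder.input('x', { dataType: 'float32', shape: [42, 1] });
const y = builder.gemm(W, x, { alpha: 1.0 });
const graph = await builder.build({ y });
```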

A couple of questions and discussion points in the context of the workshop:

I suspect @teropa might have perspectives and input to this discussion, so looping him in.

@padenot for Web Audio API expertise.

@huningxin for feedback on noise suppression hardware perspectives.

wchao1115 commented 3 years ago

Hybrid DSP/DNN models like RNNoise highlight a situation at the hardware support layer: DSP-based processors tend to live independently and work separately from their GPU counterparts within the same system. This is probably for historical reasons. The DSP tends to be specialized for signal processing and media, while the GPU is more general-purpose and exposes a more programmable pipeline, which makes it more suitable for the ever-evolving ML world. On the performance side, while there are GPU-based implementations of FFT, it's unclear whether they can be more efficient than built-in DSP hardware doing comparable work, when such hardware is available on the same platform. But splitting the workload across both the DSP and the GPU could also pose interoperability challenges around data transfer.
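To make the data-transfer concern concrete on the web platform side, here is a minimal sketch (all names hypothetical) of the hop a split pipeline already pays today: frames captured on the real-time audio rendering thread in an AudioWorklet must be posted to another thread before a WebNN graph, or any other accelerator-backed consumer, can process them, and the denoised frames have to travel back.

```js
// Runs on the real-time audio rendering thread. Taps the input and
// ships complete 480-sample (10 ms at 48 kHz) frames to the main
// thread, where a WebNN graph or WASM DSP would consume them.
class FrameTapProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this._pending = new Float32Array(0);
  }
  process(inputs) {
    const channel = inputs[0][0];     // one 128-sample render quantum
    if (!channel) return true;
    // Append the quantum to the pending buffer (480 is not a multiple
    // of 128, so frames straddle render-quantum boundaries).
    const merged = new Float32Array(this._pending.length + channel.length);
    merged.set(this._pending);
    merged.set(channel, this._pending.length);
    let offset = 0;
    while (merged.length - offset >= 480) {
      const frame = merged.slice(offset, offset + 480);
      this.port.postMessage(frame, [frame.buffer]);  // transfer, not copy
      offset += 480;
    }
    this._pending = merged.slice(offset);
    return true;
  }
}
registerProcessor('frame-tap', FrameTapProcessor);
```

A production implementation would use a SharedArrayBuffer ring buffer rather than per-frame allocation and messaging, but the boundary it illustrates is the same kind of data-transfer seam described above between DSP and GPU.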