mlc-ai / web-llm

High-performance In-browser LLM Inference Engine
https://webllm.mlc.ai
Apache License 2.0

Whisper in web-llm with WebGPU? #68

Open sandorkonya opened 1 year ago

sandorkonya commented 1 year ago

Great repository!

Is it within your scope to implement a WebGPU-accelerated version of Whisper?

Not sure if this helps, but there is a C/C++ port of Whisper (whisper.cpp) with a CPU implementation, and as mentioned in this discussion, the main thing that needs to be offloaded to the GPU is the GGML_OP_MUL_MAT operator.

thx
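
For context, offloading GGML_OP_MUL_MAT essentially means running the matrix multiply as a GPU compute shader. Below is a minimal sketch of what that could look like with the standard browser WebGPU API and a naive WGSL kernel; all names and the kernel itself are illustrative, not web-llm's or whisper.cpp's actual code, and it assumes WebGPU type definitions (e.g. @webgpu/types) are available:

```ts
// Naive matmul C = A x B (M x K times K x N) dispatched to WebGPU.
// Illustrative sketch only; a real engine would tile and reuse buffers.
const shader = /* wgsl */ `
struct Dims { M : u32, N : u32, K : u32 }
@group(0) @binding(0) var<uniform> dims : Dims;
@group(0) @binding(1) var<storage, read> A : array<f32>;
@group(0) @binding(2) var<storage, read> B : array<f32>;
@group(0) @binding(3) var<storage, read_write> C : array<f32>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  if (gid.x >= dims.M || gid.y >= dims.N) { return; }
  var acc = 0.0;
  for (var k = 0u; k < dims.K; k++) {
    acc += A[gid.x * dims.K + k] * B[k * dims.N + gid.y];
  }
  C[gid.x * dims.N + gid.y] = acc;
}`;

async function matmul(a: Float32Array, b: Float32Array,
                      M: number, K: number, N: number): Promise<Float32Array> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  const module = device.createShaderModule({ code: shader });
  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module, entryPoint: "main" },
  });

  // Upload dimensions (padded to 16 bytes) and the input matrices.
  const dimsBuf = device.createBuffer({
    size: 16, usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST });
  device.queue.writeBuffer(dimsBuf, 0, new Uint32Array([M, N, K, 0]));
  const upload = (data: Float32Array) => {
    const buf = device.createBuffer({
      size: data.byteLength,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST });
    device.queue.writeBuffer(buf, 0, data);
    return buf;
  };
  const aBuf = upload(a);
  const bBuf = upload(b);
  const cBuf = device.createBuffer({
    size: M * N * 4, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC });
  const readBuf = device.createBuffer({
    size: M * N * 4, usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST });

  const bind = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: dimsBuf } },
      { binding: 1, resource: { buffer: aBuf } },
      { binding: 2, resource: { buffer: bBuf } },
      { binding: 3, resource: { buffer: cBuf } },
    ],
  });

  // Record the dispatch, copy the result out, and read it back.
  const enc = device.createCommandEncoder();
  const pass = enc.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bind);
  pass.dispatchWorkgroups(Math.ceil(M / 8), Math.ceil(N / 8));
  pass.end();
  enc.copyBufferToBuffer(cBuf, 0, readBuf, 0, M * N * 4);
  device.queue.submit([enc.finish()]);

  await readBuf.mapAsync(GPUMapMode.READ);
  const out = new Float32Array(readBuf.getMappedRange().slice(0));
  readBuf.unmap();
  return out;
}
```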

tqchen commented 1 year ago

Great suggestion, yes, this is something we can push for.

sandorkonya commented 1 year ago

@tqchen my ultimate goal would be to get it to run as efficiently as possible on an Android edge device.

There is already a solution in the ONNX framework, based on the recent merge, but I am not sure when it will be usable on Android.

Some have tried GPU delegates, but without success so far.

Any idea how one could solve it on an edge (Android) device?
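
One path that stays within this repo's scope would be running the WebGPU build in a mobile browser; a quick way to check whether that route is even open on a given Android device is to probe for WebGPU support (standard API calls, nothing project-specific; assumes WebGPU type definitions are available):

```ts
// Probe whether this browser (e.g. Chrome on Android) exposes a usable WebGPU adapter.
async function hasWebGPU(): Promise<boolean> {
  if (!("gpu" in navigator)) return false;            // API not exposed at all
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null;                            // null => no usable GPU adapter
}

hasWebGPU().then((ok) => console.log(ok ? "WebGPU available" : "WebGPU unavailable"));
```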

DustinBrett commented 1 year ago

There is also a demo of Whisper running via WebAssembly in that repo. https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk.wasm

sandorkonya commented 1 year ago

> There is also a demo of Whisper running via WebAssembly in that repo. https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk.wasm

Yes, but it runs on the CPU. I hope that with a GPU version one could reach real-time inference.