ggerganov / whisper.cpp

Port of OpenAI's Whisper model in C/C++
MIT License

NPU support in whisper.cpp #1557

Open bobqianic opened 7 months ago

bobqianic commented 7 months ago

Christmas is coming soon, and I want to take some time to research something interesting, such as low-power inference on the edge. Although the current whisper.cpp can run on a Raspberry Pi, its inference performance is not sufficient for real-time transcription. Fortunately, there are now development boards whose processors include NPUs, which could be used to achieve real-time transcription with larger models. My primary goal is to support the RK3566 and RK3588 first.

Roadmap:

Reference:

https://github.com/rockchip-linux/rknpu2

ggerganov commented 7 months ago

Would be great if we can find a way to utilize the NPUs! Keep us in the loop!

Leeviber commented 7 months ago

I tried converting the Whisper encoder model to the rknpu format (.rknn). The conversion succeeded, but the estimated runtime is quite slow, even slower than running on the CPU. I think the NPU does not fully support transformers, so some operators still run on the CPU.
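
For illustration, here is a minimal sketch of this kind of conversion with Rockchip's rknn-toolkit2, assuming the encoder has first been exported to ONNX. The file names and options are illustrative, and the exact toolkit API may differ between versions:

```python
# Rough sketch: convert an ONNX export of the Whisper encoder to .rknn with
# rknn-toolkit2. File names are hypothetical; check your toolkit version for
# the exact API and options.
from rknn.api import RKNN

rknn = RKNN(verbose=True)

# Target the RK3588 NPU (use 'rk3566' for the RK3566).
rknn.config(target_platform='rk3588')

# Load the ONNX export of the encoder.
if rknn.load_onnx(model='whisper_encoder.onnx') != 0:
    raise RuntimeError('load_onnx failed')

# Build without quantization to keep floating-point weights.
if rknn.build(do_quantization=False) != 0:
    raise RuntimeError('build failed')

# Export the .rknn model that can be deployed on the board.
rknn.export_rknn('whisper_encoder.rknn')
rknn.release()
```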

RoboMagus commented 7 months ago

Some interesting development was done here: https://github.com/usefulsensors/useful-transformers.

However, not everything runs on the NPU, and I've personally had mixed success running non-English models.

bobqianic commented 7 months ago

> Some interesting development was done here: https://github.com/usefulsensors/useful-transformers.

Yes, I've seen that. But I'm looking to enhance the ggml tensor library by adding some operators. That way, not only whisper.cpp but also other ggml-based projects like llama.cpp will be able to utilize the NPU. I've ordered an OrangePi 5 Plus with 32 GiB RAM from AliExpress, which is still in transit : )

> However, not everything runs on the NPU, and I've personally had mixed success running non-English models.

Hopefully, we'll be able to run all models, regardless of their size and whether they are English-only or multilingual.

bobqianic commented 7 months ago

The most challenging aspect I've encountered thus far is finding an appropriate driver for the RK3588 & RK3566 NPU. Most Linux distributions don't include an NPU driver, with the kernel tree below being the notable exception.

https://github.com/unifreq/linux-5.10.y-rk35xx/tree/main/drivers/rknpu

bobqianic commented 7 months ago

> I tried converting the Whisper encoder model to the rknpu format (.rknn). The conversion succeeded, but the estimated runtime is quite slow, even slower than running on the CPU. I think the NPU does not fully support transformers, so some operators still run on the CPU.

You're right. From my experiments, it seems the NPU on the RK3588 is only effective for 3x3 convolutions. Unfortunately, its GEMM performance is quite poor. Despite being equipped with a 3x2 TOPS NPU, each unit delivers only about 10 GFLOPS for FP16 GEMM or 20 GOPS for INT8 GEMM. It's quite a letdown. I regret to share such disappointing news during the holiday.

bobqianic commented 7 months ago

I discovered that someone else did the exact same thing but didn't find success. @ggerganov

The challenge with the Rockchip NPU stems from its peculiar input and output layouts. To attain maximum speed, you have to transform a 2D matrix into a particular blocked layout. If you don't, the driver takes over, but it operates much more slowly. After processing, you need to convert the result back to its original layout. This round-trip is quite inefficient, and I'm sharing this to save others from spending unnecessary time trying to implement it.

With the RK3588, when you're working with a matrix A of size (N, K) and a matrix B of size (K, M), you need to reorder matrix A into the shape (K/8, N, 8) and matrix B into the shape (M/16, K/32, 16, 32). After these transformations, the resulting output matrix C has the shape (N/4, M, 4) instead of the expected (N, M).
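
To make the layout concrete, here is a small NumPy sketch of this reordering, assuming the straightforward blocked interpretation (A[n][k] lands at A'[k/8][n][k mod 8], B[k][m] at B'[m/16][k/32][m mod 16][k mod 32], and the NPU output C'[n/4][m][n mod 4] holds C[n][m]); the exact native layout may differ in detail:

```python
# Sketch of the assumed RK3588 native matmul layouts; all dimensions must be
# multiples of the corresponding block sizes (8, 16, 32, 4).
import numpy as np

N, K, M = 64, 256, 128
A = np.random.rand(N, K).astype(np.float32)
B = np.random.rand(K, M).astype(np.float32)

# A (N, K) -> (K/8, N, 8): element A[n, k] moves to A_npu[k//8, n, k%8].
A_npu = A.reshape(N, K // 8, 8).transpose(1, 0, 2)

# B (K, M) -> (M/16, K/32, 16, 32): B[k, m] moves to B_npu[m//16, k//32, m%16, k%32].
B_npu = B.reshape(K // 32, 32, M // 16, 16).transpose(2, 0, 3, 1)

# The NPU would produce C in the shape (N/4, M, 4); emulate that on the CPU,
# with C[n, m] stored at C_npu[n//4, m, n%4].
C_ref = A @ B
C_npu = C_ref.reshape(N // 4, 4, M).transpose(0, 2, 1)

# Convert the NPU result back to the expected (N, M) layout.
C = C_npu.transpose(0, 2, 1).reshape(N, M)
assert np.allclose(C, C_ref)
```

The extra reordering and copies on every matmul call are exactly the overhead described above.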

Links:
- https://clehaxze.tw/gemlog/2023/12-17-update-on-ggml-rknpu2-backend-and-rknpu2-1_6_0.gmi
- https://github.com/marty1885/llama.cpp/tree/rknpu2-backend

(Layout diagrams for Matrix A, Matrix B, and Matrix C omitted.)

solarsamuel commented 6 months ago

@bobqianic this is a great idea. The question is how we can implement whisper.cpp on an NPU/TPU on an embedded device.

I have an OrangePi 5 and was hoping the NPU would provide benefits, but it looks like it won't be very useful. Thank you for looking into it.

I have one idea that may be theoretically possible, but it would require a good amount of work and $$$. The idea is to use 4 Google Coral Edge TPUs in a pipeline (see the pipeline example here: https://coral.ai/examples/) and in essence jailbreak them (George Hotz is working on this in these videos: https://www.youtube.com/watch?v=rArv2NUXGU8) to run models other than TensorFlow models (for example, Whisper models). The Coral Edge TPUs would take up all of the USB slots on a Raspberry Pi (though a USB hub could be used too), so there would be a bandwidth constraint. Each TPU has up to 8 MB of SRAM to store the models, but in reality it's more like 6.5 MB each, so probably a maximum model size of 26 MB for 4 of these units. The 4-bit quantized tiny model comes in under this. The entire setup may be possible and may run quickly, but the accuracy of the tiny model isn't that great.

Another idea would be to take TPUs or FPGAs and connect them to a Raspberry Pi via USB or as a Raspberry Pi HAT. That would be bandwidth-limited by the communication protocol (serial, I2C, etc.).

Maybe one day, when chips like this come out, things will be easier for embedded AI: https://www.arm.com/products/silicon-ip-cpu/ethos/ethos-u55

ggerganov commented 6 months ago

@bobqianic Thank you for the updates! The work in marty1885/llama.cpp@rknpu2-backend is interesting and I will be following the progress.

marty1885 commented 6 months ago

For reference: people have worked around the matrix reordering specifically for Whisper by designing the entire implementation around that constraint.

useful-transformers is a very successful implementation. https://github.com/usefulsensors/useful-transformers

Lhemamou commented 1 month ago

Hey :) Since Raspberry Pi is launching a new AI accelerator HAT (https://www.raspberrypi.com/products/ai-kit/), I am reopening the topic. Do you by chance have any news or ideas on how to start improving performance with this HAT? I guess it would be easier than the Coral, since we don't need to jailbreak it.

marty1885 commented 1 month ago

@Lhemamou I actually talked to Hailo about this during Computex. Long story short: no, unless someone wants to form a company and sign an NDA to gain low-level access.

solarsamuel commented 1 month ago

@marty1885 I have a company and I'd be open to signing an NDA as long as it looks reasonable, but before I go too far, my main concern is the hardware.

Does anyone know what the Hailo hardware limit is with regard to model size? Feel free to send links.

For example, the Google Coral TPU stick ASIC has 8 MB of SRAM built into the chip. Something like 1.5 MB of that is used as overhead, so a model can only be about 6.5 MB max. https://coral.ai/docs/edgetpu/compiler/#parameter-data-caching

For the Google Coral TPU, even the Whisper tiny model is too big: the 4-bit quantized version of the tiny model is around 24 MB.

| Model | Disk | Mem |
| --- | --- | --- |
| tiny | 75 MiB | ~273 MB |

I'm assuming the Hailo chip does the matrix multiply internally and the results are stored in a pipeline in internal SRAM, but I could be wrong.

marty1885 commented 1 month ago

@solarsamuel I can't tell without knowing NDA'd information. From what I gathered from their sales rep (at least I think he was a sales rep):

  1. The Hailo-8 can fit YOLOv5s and a modified version of YOLOv5m (quantized, I assume).
  2. If their compiler cannot fit the model onto the chip, they can split the model and swap the weights on the fly:
    • There will certainly be a performance impact, limited by PCIe bandwidth.
    • Alternatively, the model can be split across multiple chips.
  3. The Hailo-10H has DRAM, so you can put large models there. That eliminates the PCIe transfers; the bottleneck then becomes DRAM bandwidth.
  4. whisper.cpp requires low-level access to the accelerator: it needs to be able to command the accelerator to do matmuls directly, so a compiler-only layer is useless in this case. If you want to sign an NDA, make sure you also get that level of access.

solarsamuel commented 1 month ago

@marty1885 I can reach out. Who would be a good person to contact? I'm definitely not making any guarantees any of this will work out.

marty1885 commented 1 month ago

@solarsamuel Sorry for the late reply. I got caught up in some personal issues. Let's not misuse the issue tracker; shall we talk through email? You can find mine on my website via the link on my GitHub profile.

Your GH profile links to a company and I'm not sure if that's the one you want to use for discussion.

I don't have a contact email in mind; I don't have a business card from them, since the NDA was a big showstopper for me.