tomchang25 / whisper-auto-transcribe

Auto transcribe tool based on whisper
MIT License

Support M1/2 - With GPU / Neural Engine acceleration #21

Open 100tomer opened 1 year ago

100tomer commented 1 year ago

I don't know if it's even possible, but it would be cool and efficient to have it run on M1/M2 with the GPU or Neural Engine.

tomchang25 commented 1 year ago

Hi @100tomer,

Currently, the Whisper kernel does not support Apple's GPU acceleration libraries, and adding that support is far beyond my capabilities. However, I have learned that there is a project underway to address this issue.
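For anyone who wants to experiment in the meantime: recent PyTorch builds expose Apple's Metal backend as an `mps` device, which is what a GPU-accelerated Whisper on M1/M2 would most likely target. The sketch below is a hypothetical device-selection helper (not part of this repo) that falls back gracefully when neither MPS nor CUDA is available; whether Whisper's ops all run correctly on `mps` is a separate question.

```python
def pick_device() -> str:
    """Return the best available torch device name, falling back to CPU."""
    try:
        import torch
        if torch.backends.mps.is_available():  # Apple Silicon GPU via Metal
            return "mps"
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # torch not installed; CPU is the only option
    return "cpu"

print(pick_device())
```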

As an alternative solution, I will release a lightweight version (#19) that should achieve satisfactory results by compressing the audio and integrating with the OpenAI API. This should be completed in the near future and included in version 3.1.
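The compress-then-API idea can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: `compress_cmd` is an invented helper, it assumes ffmpeg is on PATH, and the commented-out API call assumes the pre-1.0 `openai` Python package, whose `Audio.transcribe` endpoint accepts an open file handle.

```python
import subprocess

def compress_cmd(src: str, dst: str, bitrate: str = "32k") -> list[str]:
    # Downsample to 16 kHz mono (Whisper's native sample rate) at a low
    # bitrate, shrinking the upload without losing much speech content.
    return ["ffmpeg", "-y", "-i", src, "-ar", "16000", "-ac", "1",
            "-b:a", bitrate, dst]

# subprocess.run(compress_cmd("input.wav", "small.mp3"), check=True)
# import openai
# with open("small.mp3", "rb") as f:
#     text = openai.Audio.transcribe("whisper-1", f)["text"]
```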

100tomer commented 1 year ago

> Currently, the Whisper kernel does not support Apple's GPU acceleration libraries, and adding that support is far beyond my capabilities. However, I have learned that there is a project underway to address this issue.
>
> As an alternative solution, I will release a lightweight version (#19) that should achieve satisfactory results by compressing the audio and integrating with the OpenAI API. This should be completed in the near future and included in version 3.1.

Thanks, sounds great

Hans-han commented 1 year ago

It is actually not faster with the GPU. There is a C++ port of Whisper, which is also on GitHub. The author tested the GPU vs. the CPU and found they are close in terms of performance; his guess was that memory bandwidth is the limitation.

100tomer commented 1 year ago

> It is actually not faster with the GPU. There is a C++ port of Whisper, which is also on GitHub. The author tested the GPU vs. the CPU and found they are close in terms of performance; his guess was that memory bandwidth is the limitation.

Oh, is that the RAM bandwidth or the SSD bandwidth? Also, does it use the GPU only, or the Neural Engine too?