I have an M3 Max, and whisper-cpp-python doesn't seem to use the Core ML feature.
If I use whisper-cpp-python and the medium model to transcribe an audio file that's 3 minutes 30 seconds long, it takes 76 seconds.
If I use whisper.cpp compiled with CoreML support and transcribe the same audio with the medium model, it takes 22 seconds.
If I use faster-whisper to transcribe the same audio with the medium model, it takes 69 seconds.
How can I enable whisper-cpp-python to use Core ML?
Thanks so much!
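For reference, this is roughly how I produced the CoreML-enabled whisper.cpp build I'm comparing against (a sketch based on the whisper.cpp README; the exact binary name and file paths may differ depending on the whisper.cpp version):

```shell
# Convert the medium model's encoder to Core ML
# (needs Python with coremltools and ane_transformers installed)
./models/generate-coreml-model.sh medium

# Build whisper.cpp with Core ML support enabled
cmake -B build -DWHISPER_COREML=1
cmake --build build -j --config Release

# Transcribe; whisper.cpp automatically loads the Core ML encoder
# (models/ggml-medium-encoder.mlmodelc) next to the ggml model
./build/bin/whisper-cli -m models/ggml-medium.bin -f audio.wav
```

This is what gives the 22-second result above, so the question is essentially how to get whisper-cpp-python's bundled whisper.cpp compiled with that same `WHISPER_COREML=1` flag.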