MichelBahl opened 7 months ago
Depending on your hardware (GPU cores / ANE cores), Core ML might or might not be faster:
https://github.com/ggerganov/whisper.cpp/discussions/1722#discussioncomment-8011884
Also, try generating ANE-optimized Core ML models; this can give a further improvement:
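For context, the documented Core ML workflow in whisper.cpp looks roughly like the sketch below (the `medium` model name is assumed to match the ggml model used later; the package names are taken from the repo's README and may change between versions). The ANE optimization itself is an extra option on the conversion script, described in the linked discussion; check `models/convert-whisper-to-coreml.py --help` for the exact flag in your checkout.

```sh
# Sketch of the Core ML setup, assuming a whisper.cpp checkout and a Python
# environment with the converter's dependencies (names per the README):
pip install ane_transformers openai-whisper coremltools

# Convert the encoder of the medium model to Core ML (writes
# models/ggml-medium-encoder.mlmodelc next to the ggml model):
./models/generate-coreml-model.sh medium

# Rebuild whisper.cpp with Core ML support enabled:
make clean
WHISPER_COREML=1 make -j
```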
I think Core ML is set up correctly:
Start whisper.cpp with:
./main --language de -t 10 -m models/ggml-medium.bin -f
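One thing worth double-checking (this is an assumption about the setup, not something visible from the command above): whisper.cpp only picks up the Core ML encoder if the compiled `.mlmodelc` sits next to the ggml model with the matching name, and it silently falls back to the regular encoder if it is missing. The very first Core ML run is also slow, because the model has to be compiled for the ANE.

```sh
# Expected layout for the command above (assumed names, following the
# ggml-<model>-encoder.mlmodelc convention from the README):
ls models/
#   ggml-medium.bin
#   ggml-medium-encoder.mlmodelc/
```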
Runtime (Core ML):
Runtime (normal):
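Since the Core ML path only accelerates the encoder, a more telling comparison than the wall-clock total is the encoder time from the timing summary printed at the end of each run. A minimal sketch, assuming the `samples/jfk.wav` example input from the repo (the exact timing labels may differ between versions):

```sh
# Run once with the Core ML build and once with the regular build, then
# compare the encoder timings that ./main prints when it finishes:
./main --language de -t 10 -m models/ggml-medium.bin -f samples/jfk.wav 2>&1 \
  | grep -i "encode"
```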
Did I miss something for a faster transcription?