Open 0x1FFFFF opened 1 year ago
Yeah, I got this error too. It seems we need to wait for an update from PyTorch.
Hi, I believe PyTorch has support for most of the functions now. Plus it can be run with the env var PYTORCH_ENABLE_MPS_FALLBACK=1 so that functions that aren't supported yet fall back to the CPU.
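To illustrate the pattern described above: the fallback env var has to be set before PyTorch initializes its MPS backend, and the device should only be requested when the backend is actually available. A minimal sketch (the `pick_device` helper is hypothetical, not part of WhisperX or PyTorch):

```python
import os

# PYTORCH_ENABLE_MPS_FALLBACK must be set before torch's first MPS call,
# so ops without Metal kernels silently fall back to the CPU
# instead of raising.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

def pick_device() -> str:
    """Prefer Apple's Metal backend when PyTorch was built with it."""
    try:
        import torch
        if torch.backends.mps.is_available():
            return "mps"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```

On an Apple Silicon machine with an MPS-enabled PyTorch build this prints `mps`; everywhere else it degrades to `cpu`.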
I'm running pyannote and other projects with PyTorch compiled with support for mps so this should also be do-able
@Harith163 That is correct. PyTorch has implemented "mps" (Metal Performance Shaders) for at least a year now, and OpenAI's Whisper supports "mps" as well, but faster-whisper, used by WhisperX, apparently only supports "cpu" and "cuda".
I myself get "unsupported device mps", here is the error:
```
--> 128 self.model = ctranslate2.models.Whisper(
    129     model_path,
    130     device=device,
    131     device_index=device_index,
    132     compute_type=compute_type,
    133     intra_threads=cpu_threads,
    134     inter_threads=num_workers,
    135 )
```
For me on a Mac M1, "cpu" is extremely slow, to the point that I have not been able to get a proper transcription.
Any workaround to the issue? I believe this is essential for Mac users.
Thanks :)
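For what it's worth, the error above comes from CTranslate2 (which faster-whisper wraps) only accepting "cpu" and "cuda" as device strings, so "mps" reaches the `ctranslate2.models.Whisper(...)` constructor unchanged and raises. A hedged sketch of a guard that downgrades unknown devices (the `safe_device` helper is hypothetical, and int8-on-CPU as the fastest CTranslate2 setting on Apple Silicon is an assumption):

```python
def safe_device(requested: str) -> tuple[str, str]:
    """Map a device string onto something CTranslate2 accepts.

    CTranslate2 only knows "cpu" and "cuda"; passing "mps" raises
    "unsupported device mps" in ctranslate2.models.Whisper(...).
    Returns a (device, compute_type) pair.
    """
    if requested in ("cpu", "cuda"):
        return requested, "default"
    # Fall back to CPU with int8 quantization, which is typically
    # the fastest CTranslate2 configuration on Apple Silicon.
    return "cpu", "int8"

device, compute_type = safe_device("mps")
# Then pass these through, e.g.:
# ctranslate2.models.Whisper(model_path, device=device,
#                            compute_type=compute_type, ...)
```

This doesn't use the GPU, but it at least avoids the crash and makes the CPU path as fast as it can be.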
Whisper.cpp is fast on Apple Silicon ("Plain C/C++ implementation without dependencies" … "optimized via ARM NEON, Accelerate framework, Metal and Core ML"). However, I believe it only supports very rudimentary diarization currently.
Ideally, WhisperX's solutions for diarization, etc, could be made to work in the fashion of Whisper.cpp.
That's only ideal, though: whisper.cpp's creator showed interest in the killer features of WhisperX but stated they are not coming any time soon.
I'd rather fix WhisperX to work better on M1/Apple Silicon.
Just wanted to second this. I love WhisperX on my PC, but on Mac it is just so slow.
It's resulted in fragmentation: if I want my script to be universal, I have to look elsewhere. I really wish this could be supported.
Any progress on this so far?
If I pass `mps` to the device option, it crashes. It would be wonderful if the M1 GPU could be supported.