Closed: KernAlan closed this pull request 5 months ago
Thanks for the PR! :)

One request: can you add back the `compute_type` parameter when running on the CPU? That way, someone can set it to `float32` to avoid the warning that comes up (more info: https://opennmt.net/CTranslate2/quantization.html). Thanks!
Done!
Issue
https://github.com/savbell/whisper-writer/issues/29
Solution
Add a check using torch to see whether CUDA is available. If it isn't, gracefully degrade to the CPU.
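The check above, combined with the `compute_type` request from the review, could be sketched roughly like this. This is an illustrative sketch, not the PR's actual code: the function name `select_device` and the choice of `float16` on GPU are assumptions, and in the app the flag would come from `torch.cuda.is_available()`.

```python
def select_device(cuda_available: bool) -> tuple[str, str]:
    """Return a (device, compute_type) pair for the Whisper model.

    Sketch only: in whisper-writer, cuda_available would be the
    result of torch.cuda.is_available().
    """
    if cuda_available:
        # Hypothetical choice; the PR may keep the user's configured value.
        return ("cuda", "float16")
    # float32 on CPU avoids CTranslate2's quantization warning.
    return ("cpu", "float32")

# The pair would then be passed on when loading the model, e.g.:
# model = WhisperModel(model_name, device=device, compute_type=compute_type)
```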
Testing
Config JSON set to `auto`, but the torch CUDA check fails, and the model gracefully degrades to CPU: