Open OrianeN opened 1 year ago
Not all CUDA devices support bfloat16 mixed precision - I got this error myself when launching train.py:
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.
Therefore I propose this PR to let the user choose which precision dtype they want to use during training.
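Below is a minimal sketch of how such an option could be exposed, assuming a PyTorch train.py driven by an argparse CLI. The --dtype flag, its choices, and the resolve_dtype helper are illustrative names, not the actual code in this PR:

```python
# Minimal sketch (not the actual PR diff): expose the autocast dtype as a CLI
# option and check bfloat16 support up front instead of failing mid-training.
import argparse

import torch


def resolve_dtype(name: str) -> torch.dtype:
    """Map a CLI string to a torch dtype, rejecting bf16 on unsupported GPUs."""
    dtypes = {"bf16": torch.bfloat16, "fp16": torch.float16, "fp32": torch.float32}
    dtype = dtypes[name]
    # Pre-Ampere GPUs raise at runtime when autocasting to bfloat16, so fail
    # early with a clear message and point the user to float16 instead.
    if dtype is torch.bfloat16 and not torch.cuda.is_bf16_supported():
        raise ValueError("This CUDA device does not support bfloat16; use --dtype fp16.")
    return dtype


parser = argparse.ArgumentParser()
parser.add_argument("--dtype", choices=["bf16", "fp16", "fp32"], default="bf16",
                    help="Precision dtype used for mixed-precision training.")
args = parser.parse_args()

dtype = resolve_dtype(args.dtype)

# Usage inside the training loop: autocast with the chosen dtype.
# A GradScaler is only needed when training in float16.
scaler = torch.cuda.amp.GradScaler(enabled=(dtype is torch.float16))
with torch.autocast(device_type="cuda", dtype=dtype, enabled=(dtype is not torch.float32)):
    pass  # forward pass would go here
```

Keeping bf16 as the default would preserve the current behaviour for supported GPUs, while users on older devices can pass --dtype fp16 instead of hitting the RuntimeError above.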