DsaltYfish closed this issue 8 months ago
Hi, thanks for raising this. This issue has now been fixed in the 1.0.0.post2 release.
After upgrading to version 1.0.0.post2, I ran into a new issue: during backpropagation I get the error "expected scalar type Float but found BFloat16".
Running into the same problem ...
+1. same problem.
I have been using PyTorch's native Automatic Mixed Precision (AMP) with the data type set to bf16 (bfloat16), but I hit an error during the backward pass. I would like to know whether this is due to a lack of framework support or an error in my usage.
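For reference, here is a minimal sketch of the usage pattern described above. The module, shapes, and loss are illustrative assumptions, not taken from the issue; the point is just the bf16 autocast context around the forward pass followed by `backward()`:

```python
import torch
import torch.nn as nn

# Illustrative placeholder module; in the real report this would be a model
# using the library in question.
model = nn.Linear(64, 64).cuda()
opt = torch.optim.AdamW(model.parameters())

x = torch.randn(8, 64, device="cuda")

# PyTorch native AMP with bfloat16 (no GradScaler needed for bf16).
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    out = model(x)                    # forward runs in bf16 under autocast
    loss = out.float().pow(2).mean()  # reduce in fp32 for the loss

# The reported "expected scalar type Float but found BFloat16" error
# surfaces during this backward pass.
loss.backward()
opt.step()
opt.zero_grad()
```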