kaustabpal closed this issue 1 year ago
No, PyTorch3D's pointcloud operators are designed for float32. It may be possible for users to adapt our kernels to support data in other formats.
I am new to this and quite confused. I think I have to make the change somewhere in knn.cu, but I am not exactly sure what change I need to make. I would be grateful if you could point me to a good resource that gives me an idea of what to do.
I can't really point to one good source. You can look at https://pytorch.org/tutorials/advanced/cpp_extension.html, and I often find it helpful to read the PyTorch sources themselves, e.g. for the definitions of the things that tutorial describes.
In this case, can you try replacing every occurrence of AT_DISPATCH_FLOATING_TYPES in that knn.cu file with AT_DISPATCH_FLOATING_TYPES_AND_HALF? (Note: AT_DISPATCH_CASE_FLOATING_TYPES_AND_HALF is a helper for AT_DISPATCH_SWITCH with a different signature; the drop-in replacement is the macro without CASE.)
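For illustration, this is roughly what the suggested change looks like. The snippet below is a sketch of the dispatch pattern used in PyTorch CUDA extensions, not the actual contents of knn.cu; the kernel name and arguments are placeholders.

```cuda
// Before: dispatches the templated kernel body only for float and double.
AT_DISPATCH_FLOATING_TYPES(
    p1.scalar_type(), "knn_kernel_cuda", ([&] {
      // kernel launch using scalar_t (float or double)
    }));

// After: also generates a branch for at::Half, so float16 inputs
// (e.g. from mixed-precision training) reach a compiled kernel
// instead of raising a "not implemented for 'Half'" error.
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
    p1.scalar_type(), "knn_kernel_cuda", ([&] {
      // same kernel launch; scalar_t may now be at::Half
    }));
```

Note that simply dispatching for half may not be enough on its own if the kernel uses operations or intrinsics that lack half-precision overloads; those call sites may need adapting too.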
I am getting this error when using the chamfer distance loss with 16-bit mixed precision. Can someone kindly tell me why this is happening? Does chamfer distance not support 16-bit mixed-precision training?
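For context on what the op computes: chamfer distance averages nearest-neighbour squared distances in both directions between two point sets. Below is a minimal pure-Python sketch of that definition (not PyTorch3D's actual implementation, which runs a KNN kernel on the GPU and is where the float32 requirement comes from).

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets.

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set; average each direction and sum the two.
    """
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_direction(src, dst):
        # mean over src of the nearest-neighbour squared distance into dst
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_direction(a, b) + one_direction(b, a)

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 0.0), (1.0, 1.0)]
print(chamfer_distance(a, b))  # → 1.0
```

A common workaround, if you don't want to modify the kernels, is to cast the point clouds to float32 just before the loss call (e.g. inside an autocast-disabled region), since the loss itself is usually cheap relative to the network forward pass.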