facebookresearch / pytorch3d

PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
https://pytorch3d.org/

RuntimeError: "knn_kernel_cuda" not implemented for 'Half' #1545

Closed. kaustabpal closed this issue 1 year ago.

kaustabpal commented 1 year ago

I am getting this error when I use the chamfer distance loss with 16-bit mixed precision. Can someone kindly tell me why this is happening? Does chamfer distance not support 16-bit mixed precision training?

bottler commented 1 year ago

No, PyTorch3D's pointcloud operators are designed for float32. It may be possible for users to adapt our kernels to support data in other formats.

kaustabpal commented 1 year ago

I am new to this and quite confused. I think I have to make the change somewhere in knn.cu, but I am not exactly sure what change I need to make. I would be grateful if you could point me to a good resource that gives me an idea of what to do.

bottler commented 1 year ago

I can't really point to a good resource. You could look at https://pytorch.org/tutorials/advanced/cpp_extension.html, and I often find it helpful to read the PyTorch sources themselves, e.g. for the definitions of the things that tutorial describes.

In this case, can you try replacing every occurrence of AT_DISPATCH_FLOATING_TYPES in that knn.cu file with AT_DISPATCH_FLOATING_TYPES_AND_HALF?
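For illustration, here is a minimal sketch of what that replacement looks like at a typical ATen dispatch site, assuming the file is built as part of a PyTorch CUDA extension with nvcc. This is not the actual contents of knn.cu: DummyKnnKernel, DispatchExample, the tensor names p1/p2, and the launch configuration are placeholders; only the change of dispatch macro is the point.

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>
#include <ATen/cuda/CUDAContext.h>

// Placeholder kernel standing in for the real KNN kernels in knn.cu.
template <typename scalar_t>
__global__ void DummyKnnKernel(const scalar_t* p1, const scalar_t* p2, int64_t n) {
  // The real kernels compute pairwise point distances here; omitted.
}

void DispatchExample(const at::Tensor& p1, const at::Tensor& p2) {
  cudaStream_t stream = at::cuda::getCurrentCUDAStream();

  // Before: AT_DISPATCH_FLOATING_TYPES only instantiates the lambda for
  // float and double, so Half inputs fail with
  // RuntimeError: "knn_kernel_cuda" not implemented for 'Half'.
  //
  // AT_DISPATCH_FLOATING_TYPES(
  //     p1.scalar_type(), "knn_kernel_cuda", ([&] { ... }));

  // After: the *_AND_HALF variant takes the same (scalar type, name, lambda)
  // arguments and additionally instantiates the body for at::Half.
  AT_DISPATCH_FLOATING_TYPES_AND_HALF(
      p1.scalar_type(), "knn_kernel_cuda", ([&] {
        DummyKnnKernel<scalar_t><<<64, 256, 0, stream>>>(
            p1.data_ptr<scalar_t>(), p2.data_ptr<scalar_t>(), p1.size(0));
      }));
}
```

Note that even with this change the kernels would presumably compute and accumulate distances in half precision, so results may be less accurate than with float32, which is likely part of why the operators target float32 in the first place.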