justanhduc / neuralnet-pytorch

A high level framework for general purpose neural networks in Pytorch.
https://neuralnet-pytorch.readthedocs.io
MIT License
27 stars 6 forks

Batches of varying size for Chamfer Loss #5

Open ahariri13 opened 4 years ago

ahariri13 commented 4 years ago

For both the Chamfer loss and the EMD loss, I have batches of point clouds of different sizes. Since a batch size of 1 is impractical, is it correct to stack all the point clouds (say, 400 of them) into a single (n, k) tensor and pass that to the Chamfer/EMD loss? Otherwise I would have to pad my point clouds to a common length.
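The two options mentioned above can be made concrete. Below is a minimal NumPy sketch of the padding route: every cloud is padded to the length of the largest one, and a boolean mask records which points are real. The function name `pad_point_clouds` and the zero-padding convention are illustrative assumptions, not part of neuralnet-pytorch's API.

```python
import numpy as np

def pad_point_clouds(clouds, pad_value=0.0):
    """Pad a list of (n_i, k) point clouds to a common length n_max.

    Returns a (b, n_max, k) array plus a (b, n_max) boolean mask that is
    True for real points and False for padding. (Illustrative helper,
    not part of neuralnet-pytorch.)
    """
    n_max = max(c.shape[0] for c in clouds)
    k = clouds[0].shape[1]
    batch = np.full((len(clouds), n_max, k), pad_value, dtype=np.float64)
    mask = np.zeros((len(clouds), n_max), dtype=bool)
    for i, c in enumerate(clouds):
        batch[i, : c.shape[0]] = c
        mask[i, : c.shape[0]] = True
    return batch, mask

clouds = [np.random.rand(5, 3), np.random.rand(8, 3)]
batch, mask = pad_point_clouds(clouds)
print(batch.shape, mask.sum(axis=1))  # (2, 8, 3) [5 8]
```

Note that a loss kernel which is unaware of the mask will include the padding points in the distance computation, which is exactly why padding can bias the Chamfer/EMD values.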

justanhduc commented 4 years ago

Hello @ahariri13. For this loss, yes, you have to pad the point clouds to a common size. These days I am also working with point clouds of various resolutions, and, like you, I stack everything into one big point cloud. Unfortunately I am not working with Chamfer distance at the moment, so I haven't converted this loss to the stacked format yet. If you are familiar with CUDA kernels, a PR is more than welcome. If you just want to get the job done, I suggest using KNN from torch-cluster to find the nearest neighbors and computing the distances yourself, though of course that will not be as fast as a dedicated kernel.
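To make the "stacked point cloud" idea concrete, here is a brute-force NumPy sketch of a symmetric Chamfer distance over stacked clouds, where a per-point batch-index vector says which cloud each point belongs to. This is only an illustration of the math: a real implementation would replace the dense pairwise-distance argmin with the 1-nearest-neighbor query from torch-cluster (or a CUDA kernel, as discussed above). The function name `chamfer_stacked` is assumed, not an API of this library.

```python
import numpy as np

def chamfer_stacked(x, x_batch, y, y_batch):
    """Symmetric Chamfer distance for stacked point clouds.

    x: (n, k) points from all clouds stacked into one array.
    x_batch: (n,) integer cloud index for each point (likewise y, y_batch).
    Brute force O(n_b * m_b) per cloud pair; a KNN structure would
    replace the dense distance matrix and its argmin.
    """
    n_clouds = int(x_batch.max()) + 1
    total = 0.0
    for b in range(n_clouds):
        xb = x[x_batch == b]                                   # (n_b, k)
        yb = y[y_batch == b]                                   # (m_b, k)
        d2 = ((xb[:, None, :] - yb[None, :, :]) ** 2).sum(-1)  # (n_b, m_b)
        # Average nearest-neighbor squared distance in both directions.
        total += d2.min(axis=1).mean() + d2.min(axis=0).mean()
    return total / n_clouds
```

Because each cloud is sliced out by its batch index, no padding points ever enter the distance computation, which is the advantage of the stacked format.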

justanhduc commented 4 years ago

Hi @ahariri13, I'm not sure if you are still interested, but I have now written a kernel for the Chamfer loss that works with stacked point clouds. The code can be found here.