I am not sure this question has been asked before. I searched the issues by keyword and could not find anything. I want to use this chamfer distance as the loss to train a network (more specifically, a pointnet-like autoencoder).
Currently, I am using it like this (based on the python version):
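In rough terms, this is the pattern I am following (a NumPy sketch of what I understand the python version computes; the `chamfer_distance` name, the `(N, D)` point-cloud shapes, and the mean reduction are my reading of it, not the CUDA kernel itself):

```python
import numpy as np

def chamfer_distance(a, b):
    """a: (N, D), b: (M, D). Returns the two directional distance vectors:
    dist1[i] = min_j ||a[i] - b[j]||^2 and dist2[j] = min_i ||b[j] - a[i]||^2."""
    # Pairwise squared distances via broadcasting: shape (N, M).
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1), d.min(axis=0)

def chamfer_loss(a, b):
    # Symmetric chamfer loss: mean over each direction, then summed.
    dist1, dist2 = chamfer_distance(a, b)
    return dist1.mean() + dist2.mean()
```

In the actual training loop the same reduction is applied to the `dist1, dist2` tensors returned by the CUDA extension.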
However, the reconstructed result does not look good. I also tried to define the loss as:
```python
loss = torch.sum(dist1) + torch.sum(dist2)
```
This gave a better overall qualitative result, but still not what I expected. The problem should not be hard: I am trying to learn a representation of simple 2D/3D point clouds (composed of squares and circles).
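For what it's worth, with fixed-size clouds the only difference between the two variants is a constant scale factor: `torch.sum` over N points is exactly N times `torch.mean` per direction, so switching reductions mostly rescales the gradients (and hence the effective learning rate) rather than changing the optimum. A quick check of that scaling (NumPy stand-in, sizes assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128                      # assumed number of points per cloud
dist1 = rng.random(N)        # per-point squared distances, direction 1
dist2 = rng.random(N)        # per-point squared distances, direction 2

loss_mean = dist1.mean() + dist2.mean()
loss_sum = dist1.sum() + dist2.sum()

# With equal-size clouds, the sum-reduced loss is exactly N * the mean-reduced one.
assert np.isclose(loss_sum, N * loss_mean)
```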
Example below (blue is the original, red is the decoder output):
Is this the correct way of using the chamfer distance as a loss?