Hi, thanks for your great work on lidar motion prediction! I have a question about the data augmentation. I notice that the "input_points" are augmented with a global flip and scaling, but the "source_points" and "target_points" do not receive this augmentation. As a result, the predicted motion may not transform "source_points" onto "target_points", so the chamfer loss would be computed incorrectly. Is this a bug?
Thanks for pointing it out. In the original implementation, all the point sets are concatenated together, augmented once, and sliced apart afterward. We are still verifying the preprocessing part after refactoring.
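For anyone hitting the same issue, here is a minimal sketch of the concatenate-then-slice strategy described above, so that one global flip/scale transform is shared by all three point sets. The function name `augment_consistently` and the sampling ranges are hypothetical, not taken from the repository:

```python
import numpy as np

def augment_consistently(input_points, source_points, target_points, rng=None):
    """Apply a single global flip/scale to all point sets so the predicted
    motion still maps source_points onto target_points for the chamfer loss.
    (Hypothetical helper; argument names mirror the arrays discussed above.)"""
    rng = np.random.default_rng() if rng is None else rng

    # Sample ONE global transform for the whole sample.
    flip_x = rng.random() < 0.5          # random flip along the x axis
    flip_y = rng.random() < 0.5          # random flip along the y axis
    scale = rng.uniform(0.95, 1.05)      # global scaling factor (assumed range)

    T = np.eye(3) * scale
    if flip_x:
        T[0, 0] *= -1.0
    if flip_y:
        T[1, 1] *= -1.0

    # Concatenate, transform once, then slice back apart — this guarantees
    # every point set sees the identical augmentation.
    n_in, n_src = len(input_points), len(source_points)
    all_pts = np.concatenate([input_points, source_points, target_points])
    all_pts = all_pts @ T.T
    return (all_pts[:n_in],
            all_pts[n_in:n_in + n_src],
            all_pts[n_in + n_src:])
```

Augmenting only `input_points` while leaving the other two untouched would break the geometric correspondence the loss relies on, which is exactly the inconsistency raised above.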