Closed: alexbgl closed this issue 3 years ago
Hi @alexbgl,
Yes, we adopt uvd-coordinates in our final version. We evaluated our method on public datasets that do not provide the camera intrinsic parameters, and we tried both coordinate systems in our experiments. With default intrinsic parameters, the transformed point clouds are a bit distorted, as you can see in Figure 1 of our supplementary material. The transformed (xyz) point clouds also led to slightly worse performance (about 1~2 percent) than the uvd-coordinates, so we adopted the uvd-coordinates directly. I believe xyz-coordinates would be more robust if accurate camera intrinsic parameters could be obtained.
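For reference, here is a minimal sketch of the uvd-to-xyz back-projection under a standard pinhole camera model. The intrinsics `fx, fy, cx, cy` below are made-up placeholder values (the datasets do not ship them), and the helper `uvd_to_xyz` is illustrative, not part of the repository code:

```python
import numpy as np

def uvd_to_xyz(points_uvd: np.ndarray, fx: float, fy: float,
               cx: float, cy: float) -> np.ndarray:
    """Back-project (u, v, d) points into camera-space (x, y, z).

    Assumes a pinhole camera model where d is the metric depth along
    the optical axis. points_uvd has shape (N, 3).
    """
    u, v, d = points_uvd[:, 0], points_uvd[:, 1], points_uvd[:, 2]
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.stack([x, y, d], axis=1)  # z equals the depth d

# Example with hypothetical intrinsics; real values must come from calibration.
pts_uvd = np.array([[320.0, 240.0, 1.5],
                    [100.0,  50.0, 2.0]])
pts_xyz = uvd_to_xyz(pts_uvd, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

If the default parameters differ from the true calibration, this mapping warps the cloud, which is the distortion visible in Figure 1 of the supplementary material.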
Hope this helps~
Yes, that helps. Thank you for your quick reply.
Hi @Blueprintf,
thanks for sharing your great work.
While trying to understand your code, I noticed that you use uvd-coordinates rather than xyz-coordinates as input for your model: https://github.com/Blueprintf/pointlstm-gesture-recognition-pytorch/blob/4f65853aa6e57a30b4620830f021dfcf6ab442e5/experiments/models/motion.py#L42
Did I get that right? If so, is there a specific motivation for that?
Thanks