VIPL-SLP / pointlstm-gesture-recognition-pytorch

This repo holds the code for the paper: An Efficient PointLSTM for Point Clouds Based Gesture Recognition (CVPR 2020).
https://openaccess.thecvf.com/content_CVPR_2020/html/Min_An_Efficient_PointLSTM_for_Point_Clouds_Based_Gesture_Recognition_CVPR_2020_paper.html
Apache License 2.0

uvd- and xyz-coordinates #7

Closed: alexbgl closed this issue 3 years ago

alexbgl commented 3 years ago

Hi @Blueprintf,

thanks for sharing your great work.

While trying to understand your code, I noticed that you use uvd-coordinates rather than xyz-coordinates as input to your model: https://github.com/Blueprintf/pointlstm-gesture-recognition-pytorch/blob/4f65853aa6e57a30b4620830f021dfcf6ab442e5/experiments/models/motion.py#L42 Did I get that right? If so, is there a specific motivation for that choice?

Thanks

ycmin95 commented 3 years ago

Hi @alexbgl,

Yes, we adopt uvd-coordinates in our final version. We evaluated our method on public datasets that do not provide the camera intrinsic parameters, and we tried both coordinate systems in our experiments. Using default intrinsic parameters, as you can see from Figure 1 in our supplementary material, the transformed point clouds are somewhat distorted. The transformed (xyz) point clouds also led to worse performance (about 1~2 percentage points) compared with uvd-coordinates, so we directly adopted the uvd-coordinates. I believe adopting xyz-coordinates would be more robust if accurate camera intrinsic parameters could be obtained.
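For reference, the uvd-to-xyz transformation discussed here is the standard pinhole back-projection; the sketch below is not code from this repo, and the function name `uvd_to_xyz` and the intrinsic values (`fx`, `fy`, `cx`, `cy`) are purely illustrative placeholders, since the real values depend on the depth sensor used to record the dataset.

```python
import numpy as np

def uvd_to_xyz(points_uvd, fx, fy, cx, cy):
    """Back-project uvd points (pixel column u, pixel row v, depth d)
    into metric xyz camera coordinates using a pinhole camera model."""
    u, v, d = points_uvd[:, 0], points_uvd[:, 1], points_uvd[:, 2]
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    z = d
    return np.stack([x, y, z], axis=1)

# Illustrative intrinsics only; inaccurate values distort the resulting clouds,
# which is the issue described above for datasets without calibration data.
points_uvd = np.array([[320.0, 240.0, 0.85],
                       [100.0, 200.0, 1.10]])
points_xyz = uvd_to_xyz(points_uvd, fx=475.0, fy=475.0, cx=320.0, cy=240.0)
print(points_xyz)
```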

Hope this helps~

alexbgl commented 3 years ago

Yes, that helps. Thank you for your quick reply.