dahiyaaneesh / peclr

This is the pretraining code for PeCLR, an equivariant contrastive learning framework for 3D hand pose estimation. The paper was presented at ICCV 2021.
https://ait.ethz.ch/projects/2021/PeCLR/
MIT License

why only use RIGHT hand data? #11

Closed. markson14 closed this issue 2 years ago.

markson14 commented 2 years ago

This is fantastic work, thank you for your contribution. I have a question: why is only RIGHT hand data used? I haven't found any clues in the paper. This is the code snippet I found in peclr/src/data_loader/youtube_loader.py (screenshot omitted). If I want to add another dataset, shall I flip the samples to the RIGHT hand (or LEFT hand)?

spurra commented 2 years ago

I think the original motivation was that since FH (FreiHAND) contains only right hands, we focused only on right hand images. It just makes it a little easier for the model to learn. If I recall correctly from past experiments (not PeCLR), in practice this gives only a minor boost. If you want to add another dataset, the question of flipping will depend on whether you want to support both right and left hand keypoint prediction or only one.
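
For context, here is a minimal sketch of what such a flip could look like when adding a new dataset. This is not the repository's actual loader code; the names `flip_left_to_right`, `joints_2d`, and `hand_side` are hypothetical illustrations.

```python
import numpy as np

def flip_left_to_right(image: np.ndarray, joints_2d: np.ndarray, hand_side: str):
    """Mirror a left-hand sample horizontally so all samples share one handedness.

    image:     H x W x 3 array.
    joints_2d: K x 2 array of (x, y) pixel coordinates.
    hand_side: "left" or "right".
    """
    if hand_side == "left":
        width = image.shape[1]
        image = image[:, ::-1].copy()                      # horizontal flip of the image
        joints_2d = joints_2d.copy()
        joints_2d[:, 0] = (width - 1) - joints_2d[:, 0]    # mirror the x coordinates
        hand_side = "right"
    return image, joints_2d, hand_side
```

Mirroring everything to a single handedness keeps the training distribution consistent with FreiHAND's right-hand-only samples; if both hands must be handled at inference time, the flip can instead be recorded as a flag so predictions can be mirrored back afterwards.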

markson14 commented 2 years ago

Thank you for your comment. That's a clear response.