xinghaochen / awesome-hand-pose-estimation

Awesome work on hand pose estimation/tracking
https://xinghaochen.github.io/awesome-hand-pose-estimation/

about one paper for citation #43

Closed MengHao666 closed 3 years ago

MengHao666 commented 4 years ago

Hi, thanks for making such a repo. I have one question: why do you mark "HOT-Net: Non-Autoregressive Transformer for 3D Hand-Object Pose Estimation" as an MM'20 paper? I could not find a BibTeX citation for it on Google Scholar. Could you explain? Thanks a lot.

Janus-Shiau commented 4 years ago

Here is the main-track paper list of ACM MM 2020; you can find HOT-Net in it: https://2020.acmmm.org/main-track-list.html

MengHao666 commented 4 years ago

> Here is the main-track paper list of ACM MM 2020; you can find HOT-Net in it: https://2020.acmmm.org/main-track-list.html

Hi, thanks for the fast reply. Do you know how to cite it in BibTeX format? I cannot find one.

Janus-Shiau commented 3 years ago

FYI.

```bibtex
@inproceedings{10.1145/3394171.3413555,
  author = {Wu, Zhenyu and Hoang, Duc and Lin, Shih-Yao and Xie, Yusheng and Chen, Liangjian and Lin, Yen-Yu and Wang, Zhangyang and Fan, Wei},
  title = {MM-Hand: 3D-Aware Multi-Modal Guided Hand Generation for 3D Hand Pose Synthesis},
  year = {2020},
  isbn = {9781450379885},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3394171.3413555},
  doi = {10.1145/3394171.3413555},
  abstract = {Estimating the 3D hand pose from a monocular RGB image is important but challenging. A solution is training on large-scale RGB hand images with accurate 3D hand keypoint annotations. However, it is too expensive in practice. Instead, we develop a learning-based approach to synthesize realistic, diverse, and 3D pose-preserving hand images under the guidance of 3D pose information. We propose a 3D-aware multi-modal guided hand generative network (MM-Hand), together with a novel geometry-based curriculum learning strategy. Our extensive experimental results demonstrate that the 3D-annotated images generated by MM-Hand qualitatively and quantitatively outperform existing options. Moreover, the augmented data can consistently improve the quantitative performance of the state-of-the-art 3D hand pose estimators on two benchmark datasets. The code will be available at https://github.com/ScottHoang/mm-hand.},
  booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
  pages = {2508–2516},
  numpages = {9},
  keywords = {curriculum learning, 3d hand-pose, multi-modal, conditional generative adversarial nets},
  location = {Seattle, WA, USA},
  series = {MM '20}
}
```
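For anyone wondering how to actually use an entry like this, a minimal LaTeX sketch is below. It assumes the entry is saved in a file named `references.bib` (the filename is illustrative), and uses ACM's `ACM-Reference-Format` bibliography style, which ships with the `acmart` class; any standard style such as `plain` would also work.

```latex
% references.bib contains the @inproceedings entry above,
% with the citation key 10.1145/3394171.3413555.
\documentclass{article}
\begin{document}
MM-Hand~\cite{10.1145/3394171.3413555} synthesizes 3D-pose-preserving
hand images for data augmentation.

% Swap in \bibliographystyle{plain} if you are not using acmart.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}
\end{document}
```

Compile with `pdflatex` + `bibtex` + `pdflatex` (twice) so the citation resolves.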