ChanglongJiangGit / A2J-Transformer

[CVPR 2023] Code for paper 'A2J-Transformer: Anchor-to-Joint Transformer Network for 3D Interacting Hand Pose Estimation from a Single RGB Image'
Apache License 2.0

Performance in Hand-Object Interaction Tasks #27

Open Bokai-Ji opened 3 months ago

Bokai-Ji commented 3 months ago

Hi @ChanglongJiangGit ,

Thank you for your excellent work!

I have a couple of questions regarding the model's performance and application. First, how well does the model perform in hand-object interaction scenarios? Additionally, could you provide some guidance on setting up the pipeline for inference on custom datasets?

I appreciate any insights you can share.

ChanglongJiangGit commented 1 month ago

Thanks for your attention!

First, A2J-Transformer can be applied to hand-object interaction datasets such as HO-3D. To write the dataloader, you can follow Keypoint Transformer (CVPR'22). Second, for inference on custom datasets, simply feed the model "input_img", and A2J-Transformer will output the 2.5D coordinates of the hand joints. From these, both the 2D coordinates and the 3D coordinates (relative to the root joint) can be visualized.
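To make the 2.5D output concrete: each joint is predicted as pixel coordinates (u, v) plus a depth relative to the root joint. Below is a minimal sketch (not part of the repository) of the standard pinhole-camera conversion from such 2.5D predictions to 3D camera-space coordinates, assuming you know the camera intrinsics and an absolute root depth; the function name and example numbers are hypothetical.

```python
import numpy as np

def uvd_to_xyz(uvd, fx, fy, cx, cy, root_depth):
    """Convert 2.5D joints (u, v in pixels, d relative to the root joint)
    to 3D camera-space coordinates using the pinhole camera model.

    uvd        : (J, 3) array of per-joint predictions
    fx, fy     : focal lengths in pixels
    cx, cy     : principal point in pixels
    root_depth : absolute depth of the root joint (same unit as d)
    """
    z = uvd[:, 2] + root_depth           # recover absolute depth per joint
    x = (uvd[:, 0] - cx) * z / fx        # back-project u to camera x
    y = (uvd[:, 1] - cy) * z / fy        # back-project v to camera y
    return np.stack([x, y, z], axis=1)

# Hypothetical example: 2 joints, intrinsics fx = fy = 500, cx = cy = 128.
uvd = np.array([[128.0, 128.0, 0.0],     # root joint at the image center
                [178.0, 128.0, 10.0]])   # second joint, 10 units deeper
xyz = uvd_to_xyz(uvd, 500.0, 500.0, 128.0, 128.0, root_depth=400.0)
# xyz[0] → [0, 0, 400]; xyz[1] → [41, 0, 410]
```

The relative-depth formulation is why the 3D output is only defined up to the root joint unless an absolute root depth (from the dataset annotations or a separate root-depth estimator) is supplied.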

Hope this will help you!