syguan96 / DynaBOA

[T-PAMI 2022] Out-of-Domain Human Mesh Reconstruction via Dynamic Bilevel Online Adaptation
224 stars 19 forks

Question about 2d annotation during testing in test datasets #4

Closed MooreManor closed 2 years ago

MooreManor commented 2 years ago

Hello! Thanks for your great work! I have some confusion about the 2D annotations used during online adaptation. Do you use the 2D ground-truth keypoints of the test set? Or do you obtain the 2D keypoints as annotations by feeding the frames into an off-the-shelf 2D pose estimator?

syguan96 commented 2 years ago

For internet videos, we use 2D keypoints detected by AlphaPose. For benchmarks, we use the annotations provided with the datasets themselves, following previous works such as ISO, etc.

MooreManor commented 2 years ago

Thanks for your detailed answer!

MooreManor commented 2 years ago

Sorry to bother you again. I still have a small question about the 2D annotations for the benchmarks. Do you use the training data of the target domain, or do you run online adaptation directly on the test data of the target domain after training on the source domain (H36M)?

syguan96 commented 2 years ago

Don't be sorry. Feel free to contact me using GitHub or email. I directly adapt the source model on the test split of benchmarks.

MooreManor commented 2 years ago

Thanks for your patience! My question is solved.

zhihaolee commented 2 years ago

Hi @syguan96 As far as I know, the 2D annotations in the official 3DPW dataset contain a lot of errors. Have you refined them?

syguan96 commented 2 years ago

Yes, I reproject the 3D skeleton of SMPL to image space.
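As a rough illustration of what such a reprojection involves, here is a minimal sketch using a standard pinhole camera model. The function name, toy intrinsics, and joint values below are illustrative assumptions, not code from DynaBOA; in practice the SMPL joints and 3DPW camera parameters would be loaded from the dataset.

```python
import numpy as np

def project_joints(joints_3d, R, t, K):
    """Project 3D joints (N, 3) in world coordinates to 2D pixels (N, 2)
    using camera extrinsics (R, t) and intrinsics K (pinhole model)."""
    cam = joints_3d @ R.T + t          # world -> camera coordinates
    uvw = cam @ K.T                    # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

# Toy example: identity rotation, focal length 1000 px,
# principal point at (500, 500), camera 2 m in front of the subject.
K = np.array([[1000.0,    0.0, 500.0],
              [   0.0, 1000.0, 500.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
joints = np.array([[0.0, 0.0, 0.0]])   # a single joint at the world origin

print(project_joints(joints, R, t, K))  # -> [[500. 500.]]
```

A point on the optical axis lands exactly on the principal point, as the toy example shows; with real 3DPW camera poses the same routine yields refined 2D keypoints aligned with the SMPL ground truth.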

zhihaolee commented 2 years ago

I tried to do this, but the camera poses seem to be very bad. How did you deal with that? Did you release the refined 2D annotations?

syguan96 commented 2 years ago

In my experience, the annotated camera poses are accurate. You can find the refined 2D poses in File 1 (see the README).

zhihaolee commented 2 years ago

Got it, thank you very much!