Closed MooreManor closed 3 years ago
For internet videos, we use the 2D keypoints detected by AlphaPose. For benchmarks, we use the annotations they already provide, following previous works such as ISO, etc.
Thanks for your detailed answer!
Sorry to bother you again. I still have a small question about the 2D benchmark annotations. For benchmarks, do you use the training data of the target domain, or do you run online adaptation directly on the test data of the target domain after training on the source domain (H36M)?
Don't be sorry. Feel free to contact me using GitHub or email. I directly adapt the source model on the test split of benchmarks.
Thanks for your patience! My question is solved.
Hi @syguan96 As far as I know, the 2D annotations in the official 3DPW dataset contain a lot of errors. Have you refined them?
Yes, I reproject the 3D SMPL skeleton to image space.
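For anyone else looking to do this, the reprojection is a standard pinhole-camera projection: transform the 3D joints from world to camera coordinates with the annotated camera pose, apply the intrinsics, then divide by depth. Below is a minimal sketch (not the authors' actual code); the function name and argument layout are my own.

```python
import numpy as np

def reproject_joints(joints_3d, R, t, K):
    """Project 3D joints (N, 3) in world coordinates to 2D pixels.

    R: (3, 3) world-to-camera rotation
    t: (3,)   world-to-camera translation
    K: (3, 3) camera intrinsics
    Names and shapes here are illustrative, not from the repo.
    """
    cam = joints_3d @ R.T + t          # world frame -> camera frame
    proj = cam @ K.T                   # apply intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective divide by depth
```

With 3DPW you would feed in the SMPL joints regressed from the ground-truth pose/shape parameters and the per-frame camera extrinsics from the annotation files.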
I had tried to do this, but the camera poses seemed to be very bad. How do you deal with that? Did you release the refined 2D annotations?
In my experience, the annotated camera poses are accurate. You can find the refined 2D poses in File 1 (see the README).
Got it, thank you very much!
Hello! Thanks for your great work! I have some confusion about the 2D annotations used during online adaptation. Do you use the 2D ground-truth keypoints of the test set, or do you obtain the 2D keypoints as pseudo-annotations by feeding the frames into an off-the-shelf 2D pose estimator?