Closed macaodha closed 6 years ago
Actually, it is the same as the original repo; it is the normalized data.
Best.
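For reference, the "normalized data" presumably means the same per-dimension standardization the original 3d-pose-baseline code applies, with mean and std computed over the training split. A minimal sketch of that idea (the function and variable names here are illustrative, not the repo's actual API):

```python
import numpy as np

def normalize_data(data, data_mean, data_std, dims_to_use):
    # Keep only the joint dimensions the model actually uses, then apply
    # per-dimension z-score normalization with statistics computed over
    # the training set.
    data = data[:, dims_to_use]
    return (data - data_mean[dims_to_use]) / data_std[dims_to_use]
```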
So it's the same as running their data loading and then preprocessing code (e.g. including projection, keypoint exclusion, etc.)?
Thanks
yes
Thanks for all your help. When I load the data with your code, it has the following size: Train size 1559752, Test size 548819.
From the original TensorFlow repo, the train size is the same but the test set is bigger: Test size 550644.
Any ideas why it might be different?
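For context, the train/test sizes above are total frame counts summed over all (subject, action, camera) sequences. A rough sketch of the kind of check being done, assuming the data is stored as dicts of per-sequence arrays in .pth.tar files (file names and layout are assumptions, not necessarily the repo's exact format):

```python
import torch

# Assumed layout: dicts mapping (subject, action, camera) keys to
# arrays of shape (num_frames, num_dims).
train_set = torch.load('train_2d.pth.tar')
test_set = torch.load('test_2d.pth.tar')

print('Train size', sum(v.shape[0] for v in train_set.values()))
print('Test size', sum(v.shape[0] for v in test_set.values()))
```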
Sorry, I have fixed this and updated the data.
The videos provided by Human3.6M include a damaged video, so the test set is smaller when using the stacked hourglass network to predict 2D poses, and I mistakenly missed this sequence when processing the ground-truth data.
I will upload the data processing code. Thanks!
Thanks
@weigq, @macaodha By the way, as supplementary information:
This action has no video in the Human3.6M dataset: Subject 11, Action Directions, Camera 54138969 (no video).
The actions below have fewer annotations than expected in the global coordinates (not the image plane):
- Subject 9, Action Greeting
- Subject 9, Action SittingDown_1
- Subject 9, Action Waiting_1
- Subject 9, Action Walking
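One way to handle these when regenerating the ground-truth/test data is to keep an explicit skip list of the problematic sequences. A hedged sketch (the key format and the single excluded sequence below only reflect what is listed in this thread):

```python
# Sequence with no released video, per the note above.
MISSING_VIDEO = {
    (11, 'Directions', '54138969'),
}

def keep_sequence(subject, action, camera):
    # Skip missing/damaged sequences so the 2D detections and the 3D
    # ground truth stay aligned frame-for-frame.
    return (subject, action, camera) not in MISSING_VIDEO
```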
Good Work!
@salihkaragoz Great! Thanks for the supplementary information.
I will close this issue; you can reopen it if needed.
Hi there,
Your code looks great. I was just wondering what the main difference is between your pre-processed dataset and h36m.zip from the original TensorFlow repo. Thanks