open-mmlab / mmpose

OpenMMLab Pose Estimation Toolbox and Benchmark.
https://mmpose.readthedocs.io/en/latest/
Apache License 2.0

Preprocess of human36m keypoint training data #703

Open FireMonkey796 opened 3 years ago

FireMonkey796 commented 3 years ago

Hello. What should I do if I want to train a Human3.6M 3D keypoint model? I can only find documentation for Human3.6M mesh recovery, and only the preprocessing procedure for test data.

ly015 commented 3 years ago

Please check it here.

FireMonkey796 commented 3 years ago

Thank you. The preprocessing of 3D keypoint data doesn't appear in https://github.com/open-mmlab/mmpose/blob/master/docs/data_preparation.md. Maybe you could add it there.

Also, which parts of the dataset should I download, and how should the original files be arranged to run the script?

ly015 commented 3 years ago

Thanks for your suggestion, we have added the 3D keypoint preprocessing to data_preparation.md.

To run the preprocessing script, you need to download "Videos", "D2_Positions", "D3_Positions_mono" for each subject, and also the metadata in the official dataset code package. The original files should be placed like:

```
original
├── s1
│   ├── Videos.tgz
│   ├── D2_Positions.tgz
│   └── D3_Positions_mono.tgz
├── s2
...
```

And the path to the original folder and the metadata file needs to be specified in the arguments of the preprocessing script.
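Before running the script, it can help to verify the layout above is in place. A minimal sketch of such a check (the subject list and helper name are illustrative, not part of mmpose):

```python
import os

# Archives expected under original/<subject>/, following the tree above.
ARCHIVES = ["Videos.tgz", "D2_Positions.tgz", "D3_Positions_mono.tgz"]
SUBJECTS = ["s1", "s2"]  # extend with the subjects you downloaded


def check_layout(root):
    """Return the list of expected archive paths missing under root."""
    missing = []
    for subj in SUBJECTS:
        for name in ARCHIVES:
            path = os.path.join(root, subj, name)
            if not os.path.isfile(path):
                missing.append(path)
    return missing
```

An empty return value means every subject directory contains all three archives.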

FireMonkey796 commented 3 years ago

Thank you. I wonder what files are contained in the metadata? Is it downloaded here? (screenshot of the download page attached)

FireMonkey796 commented 3 years ago

And I got a package error:

```
Traceback (most recent call last):
  File "/home/qwe/Desktop/mmpose/tools/dataset/preprocess_h36m.py", line 15, in <module>
    from spacepy import pycdf
  File "/home/qwe/anaconda3/envs/mm/lib/python3.7/site-packages/spacepy/pycdf/__init__.py", line 1288, in <module>
    'before import.').format(', '.join(_libpath)))
Exception: Cannot load CDF C library; checked . Try 'os.environ["CDF_LIB"] = library_directory' before import.
```

ly015 commented 3 years ago

Please find the "metadata.xml" file in the v1.2 software package:

@cherryjm Could you please help with the pycdf issue?

cherryjm commented 3 years ago

> Exception: Cannot load CDF C library; checked . Try 'os.environ["CDF_LIB"] = library_directory' before import.

It is probably due to unsuccessful installation of NASA CDF library. Please refer to the official installation guide. If you still have trouble with pycdf, alternatively, you can try another Python package -- cdflib, which should be easier to install.

FireMonkey796 commented 3 years ago

Thank you! I have successfully run the script to preprocess the data. But I met another problem when trying to train the Human3.6M keypoint model:

```
FileNotFoundError: Body3DH36MDataset: [Errno 2] No such file or directory: '/media/qwe/Windows/1TB_dataset/mmpose_3d_data/annotation_body3d/fps50/h36m_train.npz'
```

How can I obtain the folder "fps50"?

cherryjm commented 3 years ago

The folder "fps50" will be generated automatically by running the preprocessing script with sample_rate=1. The default sample_rate is 5, which results in 10fps annotations.
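In other words, the raw annotations are at the videos' native 50fps and the script keeps every sample_rate-th frame. A small sketch of that relationship (the helper names are illustrative, not the script's actual code):

```python
H36M_VIDEO_FPS = 50  # native frame rate of the Human3.6M videos


def output_fps(sample_rate):
    """Frame rate of the annotations generated for a given sample_rate."""
    return H36M_VIDEO_FPS // sample_rate


def sampled_frames(n_frames, sample_rate):
    """Indices of the frames kept when subsampling a clip."""
    return list(range(0, n_frames, sample_rate))


# sample_rate=1 -> 50fps (folder fps50); sample_rate=5 -> 10fps (default)
```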

FireMonkey796 commented 3 years ago

So the preprocessing script should be run twice to enable training for Human3.6M keypoints, right? Once with sample_rate=1 and once with sample_rate=5.

cherryjm commented 3 years ago

It depends on which model you use. For example, we use 50fps data to train and evaluate SimpleBaseline3D. Please refer to configs for detailed data configurations.
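For orientation, a hedged sketch of what the dataset section of such a config might look like, with annotation paths following the fps50 layout discussed above (field names and paths are an approximation, not copied from an actual mmpose config):

```python
# Sketch of a Body3DH36MDataset data config (assumed field names/paths).
data_root = 'data/h36m'
data = dict(
    train=dict(
        type='Body3DH36MDataset',
        ann_file=f'{data_root}/annotation_body3d/fps50/h36m_train.npz',
        img_prefix=f'{data_root}/images/',
    ),
    test=dict(
        type='Body3DH36MDataset',
        ann_file=f'{data_root}/annotation_body3d/fps50/h36m_test.npz',
        img_prefix=f'{data_root}/images/',
    ),
)
```

Always check the config shipped with the model you are training; the annotation file names there are authoritative.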

Ared521 commented 2 years ago

> Please find the "metadata.xml" file in the v1.2 software package.
>
> @cherryjm Could you please help with the pycdf issue?

Hello, can the v1.0 metadata.xml be used for dataset preprocessing? I don't have the v1.2 version.