Hello,
I have followed the AMASS_DNN tutorial.
I have downloaded all npz files from your website and tried both splits you suggested:
amass_splits = { 'vald': ['SFU',], 'test': ['SSM_synced'], 'train': ['MPI_Limits'] }
vs
amass_splits = { 'vald': ['HumanEva', 'MPI_HDM05', 'SFU', 'MPI_mosh'], 'test': ['Transitions_mocap', 'SSM_synced'], 'train': ['CMU', 'MPI_Limits', 'TotalCapture', 'Eyes_Japan_Dataset', 'KIT', 'BML', 'EKUT', 'TCD_handMocap', 'ACCAD'] }
However, for both splits, I end up with a dataset of the same size: only 1182 samples for training, 854 for validation, and 56 for testing. I was under the impression that I could generate a much larger dataset of 3D point clouds. Am I missing anything?
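As a sanity check on my side, I counted the raw mocap frames in the downloaded files with a small script (a rough sketch; the path is hypothetical, and it assumes AMASS motion npz files store per-frame poses under a 'poses' key):

```python
from pathlib import Path
import numpy as np

def count_amass_frames(amass_root):
    """Sum per-frame pose counts across every AMASS-style npz under amass_root."""
    total_frames = 0
    for npz_path in Path(amass_root).rglob('*.npz'):
        with np.load(npz_path) as data:
            # Motion files store per-frame body poses under 'poses';
            # files without this key (e.g. per-subject shape files) are skipped.
            if 'poses' in data:
                total_frames += data['poses'].shape[0]
    return total_frames

# Point this at the folder holding the unpacked AMASS datasets (hypothetical path):
# print(count_amass_frames('./amass_data'))
```

The raw frame counts this reports are far larger than the 1182/854/56 sample counts I get from the tutorial, which is why I suspect a subsampling or preprocessing step is reducing the dataset.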