[Open] SebastianRobertson opened this issue 5 years ago
You don't need each subject to be partially labeled. You can have some subjects that are fully labeled and some that are fully unlabeled, and it should still work reasonably well as long as the subject does not differ much from the others (in terms of bone lengths). If you know the precise bone lengths for that subject, you can boost accuracy further, but we don't have this feature in our training script (so you would need to implement it manually).
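For illustration, here is a minimal sketch of how per-subject bone lengths could be estimated from labeled 3D poses. The 17-joint parent list follows a common H3.6M-style convention, but it is an assumption here, as is everything else in the snippet; it is not part of the training script:

```python
import numpy as np

# Assumed parent index for each joint in a 17-joint H3.6M-style skeleton;
# the exact ordering depends on your data, so treat this list as a placeholder.
PARENTS = [-1, 0, 1, 2, 0, 4, 5, 0, 7, 8, 9, 8, 11, 12, 8, 14, 15]

def bone_lengths(pose_3d):
    """Compute the length of each bone from a (num_joints, 3) 3D pose."""
    lengths = []
    for joint, parent in enumerate(PARENTS):
        if parent >= 0:  # skip the root, which has no parent bone
            lengths.append(np.linalg.norm(pose_3d[joint] - pose_3d[parent]))
    return np.array(lengths)

# Averaging over all labeled frames of a subject gives a per-subject estimate
# that could then be used manually, e.g. as an extra loss term during training.
subject_frames = np.random.rand(100, 17, 3)  # placeholder for real 3D data
mean_lengths = np.mean([bone_lengths(f) for f in subject_frames], axis=0)
```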
If you look at the dataset structure, it's very simple, so it should be easy to figure out. It's just a NumPy archive with a nested dictionary (subject -> action -> camera).
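As a quick way to see that structure, the snippet below inspects one of the preprocessed 2D archives. The file name is just an example and the `positions_2d` key follows the repo's prepare_data scripts; adjust both if your archive is named differently:

```python
import numpy as np

# Load a 2D keypoint archive and unwrap the pickled nested dictionary.
archive = np.load('data_2d_h36m_detectron_pt_coco.npz', allow_pickle=True)
positions_2d = archive['positions_2d'].item()  # subject -> action -> per-camera arrays

for subject, actions in positions_2d.items():
    for action, cameras in actions.items():
        for cam_idx, keypoints in enumerate(cameras):
            # Each entry is an array of shape (num_frames, num_joints, 2)
            print(subject, action, cam_idx, keypoints.shape)
```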
Thanks for your advice. So I would preprocess my videos and the videos from H36M with Detectron, then name the files (subject/action/camera) as they are in the dataset, and run the prepare_data_2d_generic script over all of them. Then I have my 2D data, which combined with the 3D data provided by the dataset is enough to train, right? Is this the smartest/easiest way to do this? Sorry for asking these stupid questions; I appreciate your help a lot and the work you provide.
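For what it's worth, a merge step along these lines could append a custom subject to an existing archive once the detections are in (frames, joints, 2) form. The file names, the `MySubject` key, and the placeholder detections are all assumptions for the sketch, not part of the repo:

```python
import numpy as np

# Load the existing H3.6M-format 2D archive (example file name).
archive = np.load('data_2d_h36m_detectron_pt_coco.npz', allow_pickle=True)
positions_2d = archive['positions_2d'].item()
metadata = archive['metadata'].item()

# Your own detections: action -> list of per-camera (frames, joints, 2) arrays.
# Placeholder data here; in practice this comes from your Detectron output.
my_detections = {'Walking': [np.zeros((100, 17, 2), dtype=np.float32)]}

# Add the new subject under a name of your choice, then save a combined archive.
positions_2d['MySubject'] = my_detections
np.savez_compressed('data_2d_h36m_detectron_pt_coco_custom.npz',
                    positions_2d=positions_2d, metadata=metadata)
```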
Hello, I have the same idea, but I'm finding it difficult to implement. Can you provide the code to convert a dataset into the H3.6M format for semi-supervised training? Thank you.
Hi guys,
I'm rather new to machine learning, and this looks like it would fit a project of mine very well. I'd like to create a dataset based on H3.6M and expand it with my own data. Basically, I would like to add or exchange one subject (my own data) and train semi-supervised for that subject. Does that make sense, or is at least a few percent of labeled data required for every subject? Is there something like a guide on how to create such a dataset, or could you give me a few hints on what to do and what kind of software to use?
Thanks a lot for any help in advance!