I have another question: are these data, such as bfa, xia, and the video data, trained separately?
Training runs from train.py without modifying any data - that part is done in gen_dataset.sh. Also, note that video data is not used during our training - it is only used at test time. The default training uses the xia dataset; if you want to train on the bfa dataset, config.py needs some further modifications. You may find this relevant issue helpful.
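For illustration only, here is a hypothetical sketch of the kind of edit that switching from xia to bfa could involve. The attribute names `dataset_name` and `data_path` below are placeholders rather than the real fields in config.py, so please check the linked issue for the exact changes.

```python
# Hypothetical sketch only -- the real attribute names in config.py differ.
# The idea is simply to point the training configuration at the bfa data
# generated by gen_dataset.sh instead of the default xia data.

class Config:
    # default setup: train on the xia dataset
    dataset_name = "xia"            # placeholder field name
    data_path = "data/xia.npz"      # placeholder path

    # to train on bfa, these (and any dataset-specific settings such as the
    # number of style classes) would need to change, e.g.:
    # dataset_name = "bfa"
    # data_path = "data/bfa.npz"
```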
Thank you very much for your reply! If it is as you said, I have the following questions:
Regarding 1 and 3 (test2d.npz contains std and mean), I think we have already discussed them in this issue, and specifically we mentioned test2d.npz. By default, we use the mean and std in test2d.npz to normalize your video data at test time. They may not work well for your own data if the skeletons in your video deviate much from the skeletons in the treadmill video. You can generate your own mean and std if you have a collection of video data, using the code referred to in that previous issue.
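A minimal sketch of how such statistics could be computed, assuming your 2D keypoints are stacked into an array of shape (num_frames, num_joints, 2). This is not the repo's script, just an illustration of the idea behind the stats stored in test2d.npz.

```python
import numpy as np

def compute_stats(keypoints):
    """keypoints: float array of shape (num_frames, num_joints, 2).

    Returns per-joint mean and std over all frames -- the same kind of
    statistics as those kept in test2d.npz (conceptually, not the exact code).
    """
    mean = keypoints.mean(axis=0)          # (num_joints, 2)
    std = keypoints.std(axis=0) + 1e-8     # small epsilon avoids division by zero
    return mean, std

def normalize(keypoints, mean, std):
    """Normalize 2D joints with the given per-joint statistics."""
    return (keypoints - mean) / std

# Example usage with your own collection of clips (array names are placeholders):
# my_keypoints = np.concatenate([clip1, clip2, clip3], axis=0)
# mean, std = compute_stats(my_keypoints)
# np.savez("my_stats.npz", mean=mean, std=std)
```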
My own video data is processed exactly the same way as your treadmill video: I use OpenPose to extract the 2D joint points, and the joint format in the generated JSON files is the same as yours. Under these conditions, do I still need to modify the mean and std you mentioned?
In that case, I think you can use the default mean and std values.
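For reference, a small sketch of loading OpenPose's per-frame JSON output, whose `pose_keypoints_2d` field is a flat [x, y, confidence] list, and applying mean/std normalization. The test2d.npz key names in the commented usage are assumptions, not confirmed from the repo.

```python
import json
import numpy as np

def load_openpose_frame(json_path, person_idx=0):
    """Read one OpenPose JSON frame and return an array of shape (num_joints, 3).

    OpenPose writes one JSON file per frame with a "people" list whose entries
    contain a flat "pose_keypoints_2d" array of [x, y, confidence] triplets.
    """
    with open(json_path) as f:
        frame = json.load(f)
    if not frame["people"]:
        return None  # no person detected in this frame
    flat = frame["people"][person_idx]["pose_keypoints_2d"]
    return np.asarray(flat, dtype=np.float32).reshape(-1, 3)

# Illustrative usage with the default statistics (file path and key names assumed):
# stats = np.load("data/test2d.npz")
# mean, std = stats["mean"], stats["std"]
# joints = load_openpose_frame("frame_000000_keypoints.json")[:, :2]  # drop confidence
# normalized = (joints - mean) / std
```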
Hello! I have now generated the bfa.npz and xia.npz files through gen_dataset.sh. For the video data, I extract joint points from my own videos and output JSON files. I would like to ask the following questions: