Closed · zsh2000 closed this issue 3 years ago
Hi Shuhong,
Many thanks for your interest in our work!
Attach the RGB.cs script to the GameObject, and run the Unity project. The RGB.cs script reads in the .h5 files of the gaits.
Best regards,
Uttaran
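For reference, the .h5 files can be inspected with h5py before setting up the Unity project. This is a minimal sketch; the key name `gait_000` and the `(frames, joints * 3)` shape below are placeholders for illustration, not the dataset's actual layout, so check the real keys and shapes in your copy of the data.

```python
# A minimal sketch of reading gait sequences out of an .h5 file with h5py.
import h5py
import numpy as np

def load_gaits(path):
    """Return a {name: array} dict for every top-level dataset in an .h5 file."""
    gaits = {}
    with h5py.File(path, "r") as f:
        for name in f.keys():
            gaits[name] = np.array(f[name])
    return gaits

# Build a tiny dummy file so the sketch runs end-to-end; the key name and
# (frames, joints * 3) shape are placeholders, not the dataset's real layout.
with h5py.File("demo_gaits.h5", "w") as f:
    f.create_dataset("gait_000", data=np.zeros((240, 16 * 3)))

for name, seq in load_gaits("demo_gaits.h5").items():
    print(name, seq.shape)  # prints: gait_000 (240, 48)
```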
Hi,
Thanks so much for your quick reply!! I will try it!
In addition, I wonder whether the experts did the annotation on the raw skeleton data, or after the skeletons were rendered to RGB videos (or into the AR environment).
Best regards, Shuhong
They actually did it on the raw RGB videos of the participants, from which we later extracted the skeletons and rendered those skeletons in Unity. Unfortunately, we cannot share the raw RGB videos at this point due to confidentiality constraints.
Thank you so much!! I get it!
Hi,
Thanks for the code and dataset you have released! I'm really interested in your work!
I have some questions about the dataset as follows:
1. I want to visualize the motion sequences in the dataset, but I find that the root joint (joint #0) is not at the origin of the coordinate system, which differs from the ELMD dataset setting (visualized in the first figure below). Also, when I try relocating the root joint to the origin, the scales and viewing angles of the gaits vary a lot (visualized in the second figure below). Do I need to do some preprocessing before visualizing the gaits?
2. The synthetic gaits in the dataset look a little strange: after visualization, the person appears to be dragging his/her legs rather than walking.
3. How are the emotion labels annotated? Are they annotated by pretrained models, or manually?
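For question 1 above, root-centering a sequence can be sketched as follows. This is a minimal NumPy sketch under the assumption that each sequence is a `(frames, joints, 3)` array with joint #0 as the root; the actual array layout in the dataset may differ.

```python
# Sketch of root-centering a skeleton sequence before visualization.
# Assumption: positions has shape (frames, joints, 3) and joint 0 is the root.
import numpy as np

def center_on_root(positions, root_joint=0):
    """Subtract the root joint's position from every joint, frame by frame,
    so the root sits at the origin in all frames."""
    return positions - positions[:, root_joint:root_joint + 1, :]

# Toy example: 2 frames, 3 joints.
seq = np.array([[[1.0, 2.0, 3.0], [2.0, 2.0, 3.0], [1.0, 3.0, 3.0]],
                [[4.0, 5.0, 6.0], [5.0, 5.0, 6.0], [4.0, 6.0, 6.0]]])
centered = center_on_root(seq)
print(centered[:, 0])  # root joint is at the origin in every frame
```

Note that this only removes global translation; it does not normalize skeleton scale or viewing angle, which would need a separate rescaling/rotation step.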
Many thanks and best wishes!
Shuhong