Closed · yyvhang closed this issue 8 months ago
Hi, yes, we only use the frames between the start and end markers, which denote the beginning and end of an action during data collection. The motion in the other frames may carry no semantic information (it is mostly transitions between actions), so we don't use it.
Thanks for your reply! I have one more question. In each camera-pose file 'xxx_pv.txt', does the first line represent the initial position? If so, as I understand it, the last value of the coordinates should be 1 (e.g. [x, y, z, 1]), but in every file I find [x, y, z, 428]. What does the '428' mean? Also, this first line seems to be identical across all files. Is this initial position defined in the world coordinate system, and how do you determine the origin of the world coordinate system? Hope to get your help, thanks again!
I think those are the camera's intrinsic parameters, not a position. Refer to https://github.com/microsoft/HoloLens2ForCV/blob/main/Samples/StreamRecorder/StreamRecorderConverter/project_hand_eye_to_pv.py#L25 for details.
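If the files follow the HoloLens2 StreamRecorder layout that the linked script assumes, the first line holds the principal point (ox, oy) and the image size (width, height), so a trailing 428 would be the image height in pixels, and each later line holds a timestamp, the focal lengths, and a flattened 4x4 PV-to-world transform. A minimal parsing sketch under that assumption (the function names and the toy values here are illustrative, not from the dataset):

```python
import numpy as np

def load_pv_header(first_line):
    """Parse the first line: principal point (ox, oy) and image size."""
    ox, oy, width, height = (float(v) for v in first_line.split(","))
    return ox, oy, int(width), int(height)

def load_pv_frame(line):
    """Parse a frame line: timestamp, fx, fy, then a flattened
    4x4 PV-to-world transform (16 values)."""
    vals = line.split(",")
    timestamp = int(vals[0])
    fx, fy = float(vals[1]), float(vals[2])
    pv2world = np.array(vals[3:19], dtype=float).reshape(4, 4)
    return timestamp, fx, fy, pv2world

# Toy example: principal point (376, 212), image size 760x428,
# one frame with an identity PV-to-world transform.
header = load_pv_header("376.0,212.0,760,428")
frame = load_pv_frame(
    "123456,585.0,586.0,"
    "1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1"
)
```

So under this reading, the "initial position" line is not a position at all, which would also explain why it is identical across recordings made with the same camera settings.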
Thanks for the reply. Actually, I want to add annotations on the SMPL-X vertices, but I found that the vertex topology of the provided SMPL-X .obj files differs between files, which is a bit strange. Normally all SMPL-X human models share the same vertex topology (the same vertex ordering). Have you noticed this? Or is it caused by VPoser?
I think it's because I used trimesh to export the .obj files ...
Hi, thanks for the great work. I want to know whether you use only the frames between the start and end frames listed in dataset.csv for training. If so, how do you define the start and end frames, and why don't you use the other frames?