Hi, our system needs video and IMU input to run. You can use the view_aist function in evaluate.py to visualize the results. Before that, you need to download the AIST++ videos from the official site.
I tried to feed in offline video and IMU data. Do I need to write a forward_offline function in sig_mp.py? Should the input be images or a sequence of 2D joints?
No need to rewrite the forward function. Just provide the 2D keypoints detected by MediaPipe, transformed onto the Z=1 plane, together with the orientations and accelerations of the 6 IMUs in the camera coordinate system. See evaluate.py for how to call the forward function.
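For reference, here is a minimal sketch (not the repo's code) of projecting pixel keypoints onto the Z=1 plane. The intrinsics `K` and the pixel keypoints are assumptions; plug in your own calibration and the joints detected by MediaPipe:

```python
import torch

def pixels_to_z1_plane(kp_px: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """kp_px: (J, 2) keypoints in pixel coordinates, K: (3, 3) camera intrinsics.
    Returns (J, 2) coordinates on the Z=1 plane (normalized image coordinates)."""
    ones = torch.ones(kp_px.shape[0], 1)
    homo = torch.cat([kp_px, ones], dim=1)       # (J, 3) homogeneous pixel coords
    rays = homo @ torch.inverse(K).T             # back-project: K^-1 @ [u, v, 1]^T
    return rays[:, :2] / rays[:, 2:]             # divide by Z (already 1 here)

# example with a hypothetical 640x480 camera
K = torch.tensor([[600., 0., 320.],
                  [0., 600., 240.],
                  [0.,   0.,   1.]])
kp_px = torch.tensor([[320., 240.], [400., 300.]])   # two detected joints, in pixels
print(pixels_to_z1_plane(kp_px, K))
```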
The generated 3D model comes out deformed, so I have a few questions.
Hi, I still have some questions. Is the resulting pose data converted to Euler angles relative to the T-pose? If I want to get the real values of this motion, is it enough to pass seq='xyz' to rotation_matrix_to_euler_angle(r: torch.Tensor, seq='xyz')?
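In case it helps, here is a minimal sketch of an xyz Euler conversion using SciPy as a stand-in; the repo's rotation_matrix_to_euler_angle may use a different convention, so verify against it. It assumes the pose is given as per-joint rotation matrices relative to the T-pose:

```python
import torch
from scipy.spatial.transform import Rotation as R

def rotmat_to_euler_xyz(r: torch.Tensor) -> torch.Tensor:
    """r: (..., 3, 3) rotation matrices -> (..., 3) Euler angles in radians,
    'xyz' order (lowercase = extrinsic rotations in SciPy's convention)."""
    flat = r.reshape(-1, 3, 3).cpu().numpy()
    eul = R.from_matrix(flat).as_euler('xyz')
    return torch.from_numpy(eul).reshape(*r.shape[:-2], 3)

pose = torch.eye(3).repeat(24, 1, 1)     # 24 identity rotations = T-pose
print(rotmat_to_euler_xyz(pose)[0])      # -> zeros for the T-pose
```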
Hello, I just corrected it relative to the T-pose, but the generated posture is crooked and the motion looks strange. Is it because I'm missing the jump sync?
Make sure the IMU axes are aligned with the human and camera axes. You can also double-check the IMU axis calibration itself.
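As an illustration only, a minimal sketch of rotating IMU orientations and accelerations into the camera frame, with an assumed calibration rotation `R_cam_from_world` (how you obtain that rotation depends on your setup, e.g. a T-pose alignment or a known mounting):

```python
import torch

def imu_to_camera_frame(R_imu: torch.Tensor, acc_imu: torch.Tensor,
                        R_cam_from_world: torch.Tensor):
    """R_imu: (6, 3, 3) sensor orientations in the IMU global frame,
    acc_imu: (6, 3) accelerations in the same frame.
    Returns both expressed in the camera coordinate system."""
    R_cam = R_cam_from_world @ R_imu              # rotate the orientation matrices
    acc_cam = acc_imu @ R_cam_from_world.T        # rotate the acceleration vectors
    return R_cam, acc_cam

# example with a hypothetical calibration: camera looks along +Z, IMU world is Y-up
R_cal = torch.tensor([[1., 0.,  0.],
                      [0., 0., -1.],
                      [0., 1.,  0.]])
R_imu = torch.eye(3).repeat(6, 1, 1)
acc_imu = torch.zeros(6, 3)
R_cam, acc_cam = imu_to_camera_frame(R_imu, acc_imu, R_cal)
```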
Hi, I want to input a video path and output a result video. What should I do? Thank you.