CHUNYUWANG / imu-human-pose-pytorch

This is an official PyTorch implementation of "Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach" (CVPR 2020).
MIT License

3D visualization #2

Open lisa676 opened 4 years ago

lisa676 commented 4 years ago

Hi @CHUNYUWANG, great work! I want to confirm: does this repository contain code for 3D visualization, or is it only for validation and testing?

zhezh commented 4 years ago

Hi @lan786, we have not merged the 3D visualization parts into this repo.

Ly12346 commented 3 years ago

@zhezh Hello, are there any plans to incorporate the 3D visualization part into this repo in the future?

zhezh commented 3 years ago

Hi @Ly12346 We won't merge the visualization demo into this repo because it cannot run independently. However, I can briefly describe the implementation. It is developed with Qt (PyQt) and pyqtgraph. We first capture sequence images and the corresponding 3D poses, then visualize each image with QtWidgets.QLabel and the 3D pose with gl.GLLinePlotItem. The tricky part is that you need to be very careful with the 3D coordinate transformations.
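A minimal sketch of the data-side pieces described above, without the Qt window itself. The limb list, rotation `R`, and translation `t` are placeholders (the real skeleton and axis conventions depend on the dataset); the point is the coordinate transform and the `(2*L, 3)` segment layout that `gl.GLLinePlotItem(pos=..., mode='lines')` consumes, where consecutive point pairs form one limb:

```python
import numpy as np

# Hypothetical limb connectivity as (parent, child) joint-index pairs;
# the actual skeleton definition depends on the pose dataset used.
LIMBS = [(0, 1), (1, 2), (2, 3)]

def world_to_view(points_w, R, t):
    """Map (N, 3) world-frame joints into the viewer frame.
    This is the 'tricky' coordinate-transformation step: pyqtgraph's
    GL view is a z-up right-handed frame, so R/t must absorb any
    axis-convention mismatch in the pose data."""
    return points_w @ R.T + t

def pose_to_segments(joints):
    """Flatten joints into a (2 * len(LIMBS), 3) array suitable for
    gl.GLLinePlotItem(pos=segments, mode='lines'): each consecutive
    pair of rows is drawn as one line segment (one limb)."""
    segments = np.empty((2 * len(LIMBS), 3))
    for i, (a, b) in enumerate(LIMBS):
        segments[2 * i] = joints[a]
        segments[2 * i + 1] = joints[b]
    return segments
```

With these in place, the per-frame loop would transform the predicted joints, rebuild the segment array, and call `setData(pos=...)` on the `GLLinePlotItem` while the matching image frame is pushed to the `QLabel`.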

Ly12346 commented 3 years ago

@zhezh Thank you very much for sharing!

Ly12346 commented 3 years ago

Hi @zhezh I have another question for you. Regarding the IMU part, I did not find specific information about the IMUs mentioned in the paper. For my experiments I need the details, such as the model of the IMU and where it can be purchased.

zhezh commented 3 years ago

@Ly12346 We use the IMU measurements from the TotalCapture dataset, which was recorded with Xsens IMU suits.

Ly12346 commented 3 years ago

@zhezh Thank you very much.