Xinyu-Yi / TransPose

A real-time motion capture system that estimates poses and global translations using only 6 inertial measurement units
https://xinyu-yi.github.io/TransPose/
GNU General Public License v3.0

IMU data #14

Closed NoLoPhe closed 2 years ago

NoLoPhe commented 2 years ago

I am using the BMX055 sensor as input. It's a 9-DoF sensor that includes an accelerometer, gyroscope, and magnetometer.

IMU data, each sensor

Accelerometer

   [ xAccl, yAccl, zAccl]        [3 x 1]

Gyroscope

   [ xGyro, yGyro, zGyro]        [3 x 1]

Magnetometer

   [ xMag, yMag, zMag]        [3 x 1]

Model input data, each sensor

Accelerations

   [ 1.0883e+00,  2.7423e+00,  1.7926e+00]        [3 x 1]

Rotations

   [[ 0.2445,  0.1298,  0.9609],       [3 x 3]
   [-0.9482,  0.2391,  0.2090],
   [-0.2026, -0.9623,  0.1815]]

How do I convert the IMU data to the model input data? Thank you!

NoLoPhe commented 2 years ago

[image]

Quaternion

Is this a new concept called a quaternion? I don't understand this concept or how to do the conversion.

NoLoPhe commented 2 years ago

[image]

I also read in the DIP paper about the rotation R_IS : F_S → F_I (local → global). I'm still reading to find the conversion formula.

Xinyu-Yi commented 2 years ago

Because we use Noitom IMU sensors, we need the Noitom software. You don't need to install it.

You need to perform the T-pose calibration first. Check DIP or my supplementary. The codes in live_demo.py also contain the calibration process. Note that the acceleration read from Noitom is global, and you may need to change the codes a bit as most other sensors give sensor-local accelerations.

After the calibration, you can concatenate the measurements as input.
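
If your sensors report accelerations in the sensor-local frame rather than globally like Noitom, here is a rough sketch of the extra conversion step, assuming numpy and hypothetical names (R_IS is the sensor orientation expressed in the global inertial frame, a_local the raw accelerometer reading):

    import numpy as np

    def local_to_global_free_acc(R_IS, a_local, g_I=np.array([0.0, -9.8, 0.0])):
        # Rotate the local reading into the global inertial frame, then remove
        # gravity if the sensor does not already report free acceleration;
        # the gravity axis/sign depends on your inertial frame convention.
        return R_IS @ a_local + g_I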

Xinyu-Yi commented 2 years ago

A quaternion is a 4-D rotation representation. You don't need this.

NoLoPhe commented 2 years ago

Thanks, I read the concepts in sections A1 and A2.

Xinyu-Yi commented 2 years ago

You can just put your six IMUs on the desk and run the calibration codes in live_demo.py. If there is no error, the input should be all identity matrices and zero accelerations.
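
A rough sketch of that sanity check, assuming numpy and hypothetical names (R_calib is the list of six calibrated rotation matrices, a_calib the corresponding calibrated accelerations):

    import numpy as np

    def check_desk_calibration(R_calib, a_calib, rot_tol=0.1, acc_tol=0.5):
        # With all six IMUs lying still on the desk, the calibrated rotations
        # should be near identity and the calibrated accelerations near zero.
        for R, a in zip(R_calib, a_calib):
            assert np.allclose(R, np.eye(3), atol=rot_tol), "rotation not near identity"
            assert np.linalg.norm(a) < acc_tol, "acceleration not near zero"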

NoLoPhe commented 2 years ago

Thanks, if there is any problem I will ask again :))

Xinyu-Yi commented 2 years ago

Hi, I have released the Unity visualizer in this repo!

NoLoPhe commented 2 years ago

Yeah! You shared the Unity code :)) What's your upcoming article? Is it about pose estimation?

Xinyu-Yi commented 2 years ago

Yes, it's on the same topic as this one, and it solves the jitter and latency problems, which gives very smooth and physically plausible motions. It will be presented at CVPR 2022. Later I will put it on arXiv.

NoLoPhe commented 2 years ago

That's great! Looking forward to the article.

VimalMollyn commented 2 years ago

Ooo let us know when you post it on arxiv! Congrats!

Xinyu-Yi commented 2 years ago

Hi, I have put the new paper on the homepage https://xinyu-yi.github.io/PIP/ and will soon submit it to arXiv.

VimalMollyn commented 2 years ago

Awesome results! Congrats!

NoLoPhe commented 2 years ago

Thanks and congratulations on your work! The TransPose paper is great; I can't wait to read the PIP paper :))

NoLoPhe commented 2 years ago

> You need to perform the T-pose calibration first. Check DIP or my supplementary. The codes in live_demo.py also contain the calibration process. Note that the acceleration read from Noitom is global, and you may need to change the codes a bit as most other sensors give sensor-local accelerations.
>
> After the calibration, you can concatenate the measurements as input.

Hi @Xinyu-Yi ,

In fact, I don't have enough IMU sensors to perform the T-pose calibration.

The TotalCapture dataset includes accelerometer, gyroscope, and magnetometer IMU data, in the following format:

    [num sensors] [num frames]
    [frame 1]
        [sensor 1 name] [IMU local ori quat (w x y z)] [IMU local accel (x y z)] [Gyro (x y z)] [Mag (x y z)]
        ...
        [sensor n name] [IMU local ori quat (w x y z)] [IMU local accel (x y z)] [Gyro (x y z)] [Mag (x y z)]
    [frame 2]
        [sensor 1 name] [IMU local ori quat (w x y z)] [IMU local accel (x y z)] [Gyro (x y z)] [Mag (x y z)]
        ...
        [sensor n name] [IMU local ori quat (w x y z)] [IMU local accel (x y z)] [Gyro (x y z)] [Mag (x y z)]
    [frame 3]
    ...
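
For reference, a minimal sketch of reading this layout, assuming whitespace-separated values and hypothetical names (not the dataset's official tooling):

    import numpy as np

    def read_imu_file(path):
        # Parse the layout above: a header with the sensor and frame counts,
        # then, per frame, a frame line followed by one line per sensor:
        # "name qw qx qy qz ax ay az gx gy gz mx my mz".
        with open(path) as f:
            num_sensors, num_frames = map(int, f.readline().split())
            frames = []
            for _ in range(num_frames):
                f.readline()  # frame index line
                sensors = {}
                for _ in range(num_sensors):
                    fields = f.readline().split()
                    vals = np.array([float(v) for v in fields[1:]])
                    sensors[fields[0]] = {
                        "quat": vals[0:4],   # local orientation (w x y z)
                        "acc":  vals[4:7],   # local acceleration
                        "gyro": vals[7:10],
                        "mag":  vals[10:13],
                    }
                frames.append(sensors)
        return frames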

From this raw dataset, what can I do without performing the T-pose calibration? One more problem: I still can't convert the sensor-local accelerations to global. Please help me!

Xinyu-Yi commented 2 years ago

Hi, I'm not sure what problems you encountered. Please note that an IMU typically measures the sensor orientation expressed in the global inertial frame (e.g., north-east-earth) and the sensor acceleration expressed in the sensor-local frame. However, since we can tie an IMU onto our body with an arbitrary orientation, the sensor orientation is never equal to the bone orientation, which is what the model takes as input. To get the bone orientation from the sensor orientation, we need the relative rotation between sensor and bone, which we call calibration.

So in our work, we first place a sensor with its orientation the same as our body (x-left, y-up, z-forward); this sensor gives the orientation of our body (every bone in T-pose) expressed in the global inertial frame. Then we stand in T-pose and record the orientation of each sensor, which gives the sensor orientation expressed in the global inertial frame. We then calculate the rotation offset for each sensor and save them. During inference, we use the offsets to get the bone orientation from the sensor orientation and transform it to the world space.

Regarding the acceleration, we first multiply the sensor orientation with the sensor acceleration expressed in the sensor-local frame, which results in the sensor acceleration in the global inertial frame. Then we left-multiply it by the rotation matrix of the global inertial frame expressed in the world frame, which results in the acceleration expressed in the world frame. Please see our supplementary (or DIP) for details.

In summary, we must know the calibration matrices (rotation offsets, global-inertial-frame-to-world-frame transformation). Otherwise, no inertial system for human motion capture can work. If you use a dataset, you need to find out its data format and transform it to the bone orientations/accelerations expressed in the world frame (e.g., the root coordinate frame at the beginning).
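
A rough sketch of this calibration and the per-frame transform, assuming numpy, rotations already converted from quaternions to 3x3 matrices, and hypothetical names (see the supplementary or DIP for the exact formulation used by the released code):

    import numpy as np

    def calibrate(R_IS_ref, R_IS_tpose):
        # R_IS_ref: reading of the reference sensor aligned with the body
        # (x-left, y-up, z-forward); it gives the world frame expressed in the
        # global inertial frame. R_IS_tpose: per-sensor readings during T-pose.
        R_WI = R_IS_ref.T                                  # inertial -> world
        # Bone-to-sensor offsets: in T-pose the bone orientation is identity,
        # so each offset is sensor^-1 * world, both expressed in the inertial frame.
        R_SB = [R_IS.T @ R_IS_ref for R_IS in R_IS_tpose]
        return R_WI, R_SB

    def transform_frame(R_WI, R_SB, R_IS, a_local, g_W=np.array([0.0, -9.8, 0.0])):
        # Per-frame bone orientations and accelerations in the world frame.
        R_WB = [R_WI @ R @ off for R, off in zip(R_IS, R_SB)]
        # Gravity removal (g_W) is convention-dependent; drop it if your data
        # already contains free accelerations.
        a_W = [R_WI @ (R @ a) + g_W for R, a in zip(R_IS, a_local)]
        return R_WB, a_W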

Xinyu-Yi commented 2 years ago

If you are using the live demo codes, you need to follow the instructions:

1. Identify your sensors' axes first (this is provided by the sensor manufacturer).
2. Align the sensor's x-axis with your left, y-axis upwards, and z-axis forward (the direction you are facing), and record its orientation measurement (= the T-posed bone orientation expressed in the global inertial frame = the rotation of the world frame expressed in the global inertial frame).
3. Wear all IMUs and stand in T-pose (facing the same direction as the previous step), and record the orientation of every sensor (the sensor orientation expressed in the global inertial frame).
4. Calculate the 6 bone-sensor rotation offsets (which is done by our codes).
5. During inference, use the calibration matrices to transform the measurements.

The acceleration depends on whether your sensor gives free global accelerations or local accelerations, so you need to change the code a bit. What we need is the acceleration expressed in the world frame.

NoLoPhe commented 2 years ago

Thank you for patiently explaining the details to me!