Xinyu-Yi / TransPose

A real-time motion capture system that estimates poses and global translations using only 6 inertial measurement units
https://xinyu-yi.github.io/TransPose/
GNU General Public License v3.0

Request for Guidance on Adapting TransPose with MobilityLab IMU Sensors #62

Closed · deckerjulian closed 8 months ago

deckerjulian commented 1 year ago

Dear Xinyu-Yi,

I am currently working on adapting your TransPose code for use with MobilityLab IMU sensors. While the initial T-Pose calibration appears successful, I'm encountering issues during movement, particularly with acceleration and orientation estimation.

Could this be related to the calibration method? I'm unsure about how to correctly set the 'acc_scale' variable for a different IMU system. This might be the root of the problem.

For your reference, the sensor placement strictly follows the guidelines outlined in your paper. The subject remains in a T-position for the first 30 seconds, ensuring the left forearm sensor is aligned with the reference frame (x = Left, y = Up, z = Forward).

I have attached the dataset recorded under these conditions for your review. Your insights or suggestions on how to resolve these issues would be greatly appreciated.

Thank you for your time and assistance.

Best regards,

Julian

The recorded Dataset: 20231113-140810-6_13112023_2.zip

The code I'm using:

# Script to process data from an APDM Mobility Lab recording using deep learning models

import h5py
import torch
from articulate.math.angular import quaternion_to_rotation_matrix
from net import TransPoseNet
from config import paths
import articulate as art

# Initialize lists for storing sensor data from the APDM recording
accs_results = []  # Stores accelerometer data for each sensor
quat_results = []  # Stores quaternion data for each sensor

# Constants for scaling accelerometer data and defining sample frequency
acc_scale = 30  # Scale factor for accelerometer data
apdm_data_frequency = 128  # APDM data sample frequency (in samples per second)
first_imu_sample = 10  # Duration (in seconds) of the initial IMU-1 reference-frame alignment
t_pose_time = 10  # Duration (in seconds) of the T-pose
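# NOTE: Xinyu-Yi advises below not to modify acc_scale (30 is the value used by TransPose)
# and to feed the network IMU data at 60 fps, so the 128 Hz APDM stream would still need resampling.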

# Setting up the CUDA device for running the deep learning model
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = TransPoseNet().to(device)

# Filename for the APDM Mobility Lab data in HDF-5 format
h5_filename = "20231113-140810-6_13112023_2.h5"

def h5_to_dict(h5_filename):
    """
    Function to convert an APDM HDF-5 File into a dictionary.

    Args:
        h5_filename (str): Filename of the HDF-5 file.

    Returns:
        dict: A dictionary representation of the HDF-5 file data.
    """
    def recursively_load_h5_to_dict(h5_group):
        """
        Helper function to recursively load HDF-5 groups and datasets into a dictionary.

        Args:
            h5_group: HDF-5 group object.

        Returns:
            dict: Nested dictionary of the group's datasets and subgroups.
        """
        d = {}
        for key in h5_group.keys():
            item = h5_group[key]
            if isinstance(item, h5py.Dataset):  # Check if it is a dataset
                d[key] = item[()]  # Retrieve the data
            elif isinstance(item, h5py.Group):  # Check if it is a group
                d[key] = recursively_load_h5_to_dict(item)
        return d

    with h5py.File(h5_filename, "r") as f:
        return recursively_load_h5_to_dict(f)

# Load and convert the HDF-5 MobilityLab file into a dictionary
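# The script reads raw accelerometer data from 'Sensors/<id>/Accelerometer' and fused
# orientation quaternions from 'Processed/<id>/Orientation' in the APDM file (see below).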
result_dict = h5_to_dict(h5_filename)

# Sensor IDs for the specific body parts
sensor_order = ['13118', '13113', '5250', '5248', '5242', '5240']  # Order: left forearm, right forearm, left lower leg, right lower leg, head, pelvis
length_data = len(result_dict['Sensors'][sensor_order[0]]['Accelerometer'])

# Calibration and normalization
# Process sensor's data for Calibration and Normalization
time_tpose_and_1imu = ((t_pose_time+first_imu_sample)*apdm_data_frequency)
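# Number of frames spanning the IMU-1 alignment phase plus the T-pose (at 128 Hz);
# only this initial window is used for the calibration below.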
# Load per-frame sensor data from the recording
for x in range(length_data - time_tpose_and_1imu):
    accs_for_frame = []
    quat_for_frame = []

    # Iterate over each sensor
    for sensor in sensor_order:
        # Process accelerometer data
        acc_frame = result_dict['Sensors'][sensor]['Accelerometer'][x]
        acc_frame_torchtensor = -torch.tensor(acc_frame, dtype=torch.float32)  # negated? (cf. live demo: self._acc_buffer = self._acc_buffer[tranc:] + [-d[:, 10:13].astype(float) * 9.8])
        accs_for_frame.append(acc_frame_torchtensor.unsqueeze(0)) 

        # Process quaternion data
        quaternion_frame = result_dict['Processed'][sensor]['Orientation'][x]
        quat_for_frame.append(torch.tensor(quaternion_frame, dtype=torch.float32).unsqueeze(0))

    # Combine data for all sensors for the current frame
    accs_for_frame = torch.cat(accs_for_frame, dim=0)
    accs_results.append(accs_for_frame.unsqueeze(0))

    quat_for_frame = torch.cat(quat_for_frame, dim=0)
    quat_results.append(quat_for_frame.unsqueeze(0))

# Concatenate data from all frames
acc = torch.cat(accs_results, dim=0)
quat = torch.cat(quat_results, dim=0)

# Mean quaternion of IMU 1 (left forearm) over the first first_imu_sample seconds (reference-frame alignment)
n_seconds_imu1_quat_mean = quat[:first_imu_sample*apdm_data_frequency,0].mean(dim=0)
smpl2imu = quaternion_to_rotation_matrix(n_seconds_imu1_quat_mean).view(3, 3).t()

# Mean orientation and acceleration of all sensors over the T-pose window
quat = quat[first_imu_sample*apdm_data_frequency:(first_imu_sample*apdm_data_frequency)+(t_pose_time*apdm_data_frequency)].mean(dim=0)
accs = acc[first_imu_sample*apdm_data_frequency:(first_imu_sample*apdm_data_frequency)+(t_pose_time*apdm_data_frequency)].mean(dim=0)

oris = quaternion_to_rotation_matrix(quat)

device2bone = smpl2imu.matmul(oris).transpose(1, 2).matmul(torch.eye(3))
acc_offsets = smpl2imu.matmul(accs.unsqueeze(-1))   # [num_imus, 3, 1], already in global inertial frame
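# The two lines above mirror the TransPose live_demo calibration: device2bone stores each
# sensor's T-pose offset so that raw orientations can later be mapped to bone orientations
# in the SMPL (M) frame, and acc_offsets is the mean T-pose acceleration (gravity plus bias)
# expressed in that frame, which is subtracted from later readings.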

print("Calibration and normalization calculation done")

# Raw Data selection
accs_data_frames = []
quat_data_frames = []
# Generate per-frame acc and quaternion data for the motion segment (skipping an extra 1000 frames after the calibration window)
for y in range(time_tpose_and_1imu + 1000, length_data):
    accs_for_frame = []
    quat_for_frame = []

    for sensor in sensor_order:
        quat_raw = result_dict['Processed'][sensor]['Orientation'][y]
        quat_for_frame.append(torch.tensor(quat_raw, dtype=torch.float32).unsqueeze(0))

        acc_frame = result_dict['Sensors'][sensor]['Accelerometer'][y]
        acc_frame_torchtensor = -torch.tensor(acc_frame, dtype=torch.float32)  # negated? (cf. live demo: self._acc_buffer = self._acc_buffer[tranc:] + [-d[:, 10:13].astype(float) * 9.8])
        accs_for_frame.append(acc_frame_torchtensor.unsqueeze(0)) 

    # Combine data for all sensors for the current frame
    accs_for_frame = torch.cat(accs_for_frame, dim=0)
    accs_data_frames.append(accs_for_frame.unsqueeze(0))

    quat_for_frame = torch.cat(quat_for_frame, dim=0)
    quat_data_frames.append(quat_for_frame.unsqueeze(0))

# Concatenate data from all frames
accs_data_frames = torch.cat(accs_data_frames, dim=0)
quat_data_frames = torch.cat(quat_data_frames, dim=0)

# Calibration and normalization 
data_calibrated_normalized = [] 
for z in range(len(accs_data_frames)):

    quat_raw_framed = quat_data_frames[z]
    quat_raw_framed = quaternion_to_rotation_matrix(quat_raw_framed).view(1, 6, 3, 3)

    accs_raw_framed = accs_data_frames[z]

    acc_cal = (smpl2imu.matmul(accs_raw_framed.view(-1, 6, 3, 1)) - acc_offsets).view(1, 6, 3)
    ori_cal = smpl2imu.matmul(quat_raw_framed).matmul(device2bone)

     # normalization
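    # As in the TransPose live_demo: the five leaf-sensor accelerations are taken relative to
    # the root (pelvis) acceleration, everything is rotated into the root frame and divided by
    # acc_scale; leaf orientations are expressed relative to the root, the root is kept as-is.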
    acc = torch.cat((acc_cal[:, :5] - acc_cal[:, 5:], acc_cal[:, 5:]), dim=1).bmm(ori_cal[:, -1]) / acc_scale
    ori = torch.cat((ori_cal[:, 5:].transpose(2, 3).matmul(ori_cal[:, :5]), ori_cal[:, 5:]), dim=1)
    data_nn = torch.cat((acc.flatten(1), ori.flatten(1)), dim=1).to(device)
    data_calibrated_normalized.append(data_nn)

data_calibrated_normalized = torch.cat(data_calibrated_normalized, dim=0)

print("Data calibrated and normalized")

pose, tran = net.forward_offline(data_calibrated_normalized)     # offline
#pose, tran = [torch.stack(_) for _ in zip(*[net.forward_online(f) for f in data_calibrated_normalized])]   # online
art.ParametricModel(paths.smpl_file).view_motion([pose], [tran])
Xinyu-Yi commented 1 year ago

Can you send me a video of your result? yixy20@mails.tsinghua.edu.cn

deckerjulian commented 1 year ago

Hey, thanks for your fast response! Of course:

https://github.com/Xinyu-Yi/TransPose/assets/10848788/80797453-6c76-4688-af9e-1961992e8443

Xinyu-Yi commented 1 year ago

From the video, it seems that the calibration is performed correctly. I think you can check the following things:
1) The unit of acceleration should be m/s^2, not g. You should not modify acc_scale/vel_scale in any case.
2) The accelerations should be expressed in the M frame (the same reference frame as the rotations R_MB). Note that many sensors give accelerations in the sensor frame; you need to transform them to the global frame and remove g (9.8 m/s^2) from the measurement.
3) Input the IMU data at 60 fps.
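(For readers adapting other IMUs: a minimal sketch of points 2 and 3 could look like the following. It is not part of TransPose; to_global_free_acc and resample_to_60fps are hypothetical helpers, and the quaternion convention and gravity axis depend on the device, so check both against the sensor documentation.)

import torch
from articulate.math.angular import quaternion_to_rotation_matrix

def to_global_free_acc(quat, acc_sensor):
    """Rotate sensor-frame accelerations into the global frame and remove gravity.

    quat: [N, 4] sensor orientations (assumed sensor-to-global, wxyz).
    acc_sensor: [N, 3] accelerations in m/s^2, measured in the sensor frame.
    """
    R = quaternion_to_rotation_matrix(quat)                    # [N, 3, 3]
    acc_global = R.bmm(acc_sensor.unsqueeze(-1)).squeeze(-1)   # sensor frame -> global frame
    acc_global[:, 1] -= 9.8                                    # remove gravity, assuming y is up
    return acc_global

def resample_to_60fps(x, src_fps=128, dst_fps=60):
    """Nearest-neighbour resampling of a [N, ...] tensor along the time dimension."""
    n_dst = int(x.shape[0] * dst_fps / src_fps)
    idx = torch.round(torch.arange(n_dst) * src_fps / dst_fps).long().clamp(max=x.shape[0] - 1)
    return x[idx]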

Xinyu-Yi commented 1 year ago

If the code runs correctly, the results should be better than your video. Please also check: are the orientation measurements of the IMUs changing? From the video, it seems the orientation measurements of the arms are not changing. Are you waving your arms while walking?
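(A rough sanity check for this, using the tensors from the script above: if the standard deviation of an arm sensor's quaternion components over the recording is near zero, that sensor's orientation is not updating.)

arm_quat_std = quat_data_frames[:, :2].std(dim=0)  # left/right forearm sensors, shape [2, 4]
print(arm_quat_std)                                # near-zero values indicate frozen orientation readings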

deckerjulian commented 1 year ago

Hey, thanks for your fast response.

  1. MobilityLab provides the data in m/s^2.
  2. I took the routine for the M-frame reference from your live_demo code; is that right? The acceleration is normally given in the sensor frame.
    input('Put imu 1 aligned with your body reference frame (x = Left, y = Up, z = Forward) and then press any key.')
    print('Keep for 3 seconds ...', end='')
    oris = imu_set.get_mean_measurement_of_n_second(num_seconds=3, buffer_len=200)[0][0]
    smpl2imu = quaternion_to_rotation_matrix(oris).view(3, 3).t()

    input('\tFinish.\nWear all imus correctly and press any key.')
    for i in range(3, 0, -1):
        print('\rStand straight in T-pose and be ready. The celebration will begin after %d seconds.' % i, end='')
        time.sleep(1)
    print('\rStand straight in T-pose. Keep the pose for 3 seconds ...', end='')
    oris, accs = imu_set.get_mean_measurement_of_n_second(num_seconds=3, buffer_len=200)
    oris = quaternion_to_rotation_matrix(oris)
    device2bone = smpl2imu.matmul(oris).transpose(1, 2).matmul(torch.eye(3))
    acc_offsets = smpl2imu.matmul(accs.unsqueeze(-1))   # [num_imus, 3, 1], already in global inertial frame

In your code there is a line in the buffer handling, self._acc_buffer = self._acc_buffer[tranc:] + [-d[:, 10:13].astype(float) * 9.8], with a negated acceleration. Is this needed?

  1. Regarding the 60 fps input: I will try that.
  2. Yes, I'm waving my hands.
  3. For my understanding: at first, IMU 1 (left forearm) is placed in the world frame (x = Left, y = Up, z = Forward). It doesn't matter how the other sensors are placed at this point? And during the T-pose, the palms point downwards?

Many thanks for your reply.

Xinyu-Yi commented 8 months ago

I'm sorry, I forgot about your question and am replying late. If you still have any questions, you can reopen this issue directly and ask me here.