UPC-ViRVIG / SparsePoser

Official Code for ACM Transactions on Graphics (TOG) paper "SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data"
https://upc-virvig.github.io/SparsePoser
MIT License

Could you share the evaluation code for AMASS? #3

Open Lucifer-G0 opened 2 months ago

Lucifer-G0 commented 2 months ago

I am currently working with the AMASS dataset and have encountered issues with the pre-trained model not performing as expected. I am seeking assistance to properly evaluate the model and, if possible, obtain any additional evaluation code that might be available.

Here are the results I obtained on HUMAN4D by following your source code:
Evaluate Loss: 0.1619343272083112
Mean Per Joint Position Error: 0.06029307960557777
Mean End Effector Position Error: 0.10164124760273341

Lucifer-G0 commented 2 months ago

Here's the key code I tried.

def eval_pos_error(result, means, stds, device):
    res, offsets, filename = result

    gt_rots, gt_pos, gt_parents, gt_offsets = get_info_from_npz(filename)
    gt_rots = torch.from_numpy(gt_rots).float().to(device).unsqueeze(0)
    gt_pos = torch.zeros(gt_rots.shape[0], gt_rots.shape[1], 3, device=device)  # root translation kept at zero, on the same device
    gt_offsets = torch.from_numpy(gt_offsets).float().to(device)

    res = res.permute(0, 2, 1)
    res = res.flatten(0, 1)
    res = res.cpu().detach().numpy()
    dqs = res
    dqs = dqs * stds["dqs"].cpu().numpy() + means["dqs"].cpu().numpy()
    dqs = dqs.reshape(dqs.shape[0], -1, 8)
    # Get rotations and translations from dual quaternions
    _, rots = from_root_dual_quat(dqs, np.array(gt_parents))
    rots = torch.from_numpy(rots).float().to(device).unsqueeze(0)
    pos = torch.zeros(rots.shape[0], rots.shape[1], 3, device=device)  # root translation kept at zero, on the same device

    gt_joint_poses, _ = fk(gt_rots, gt_pos, gt_offsets, torch.Tensor(gt_parents).long().to(device))
    joint_poses, _ = fk(rots, pos, gt_offsets, torch.Tensor(gt_parents).long().to(device))

    # show_3d_vs(joint_poses, gt_joint_poses)

    # Error
    error = torch.norm(
        joint_poses - gt_joint_poses[:, : joint_poses.shape[1], ...], dim=-1
    )
    sparse_error = error[:, :, param["sparse_joints"][1:]]  # Ignore root joint
    return torch.mean(error).item(), torch.mean(sparse_error).item()

def get_info_from_npz(filename):
    """
    return:
        All return values are numpy arrays.
        rots: (frames, n_joints, 4) per-joint rotations as unit quaternions.
        pos: (frames, 3) root translation per frame.
        parents: parent index of each joint in the xsens skeleton structure.
        offsets: (n_joints, 3) bone offsets computed from the SMPL template joints (only the gender, male or female, is taken into account).
    """
    s2x_map = np.array(S2X_map)  # maps each xsens joint index to its corresponding SMPL joint index

    motion_data = np.load(filename)
    gender = motion_data['gender'].item().upper()
    rots = motion_data['poses'].reshape(motion_data['poses'].shape[0], 52, 3)
    pos = motion_data['trans']

    # Get skeleton graph from the SMPL model file (offsets)
    smpl_name = '../npz/SMPL_' + gender + '.npz'
    smpl_model = np.load(smpl_name)
    root0J = smpl_model["J"] - smpl_model["J"][0]
    kintree_table = smpl_model["kintree_table"]
    # Calculate offsets
    offsets = np.zeros_like(root0J)
    for i in range(1, root0J.shape[0]):
        parent_idx = kintree_table[0, i]
        offsets[i] = root0J[i] - root0J[parent_idx]

    # Convert SMPL structure data to the format required for training
    offsets = offsets[s2x_map, :]
    rots = rots[:, s2x_map, :]
    parents = xsens_parents

    # Convert rots from Euler angles to quaternions
    rot_order = np.tile(['x', 'y', 'z'], (rots.shape[0], len(parents), 1))
    rots = quat.unroll(
        quat.from_euler(rots, order=rot_order),
        axis=0,
    )
    rots = quat.normalize(rots)  # Make sure all quaternions are unit quaternions

    return rots, pos, parents, offsets
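
For reference, this is how I call the conversion and the shape relations I expect (the file name below is just a placeholder; any HUMAN4D .npz except shape.npz should work):

# Quick sanity check with a placeholder file name
rots, pos, parents, offsets = get_info_from_npz("../HUMAN4D/example_sequence.npz")
assert rots.shape[0] == pos.shape[0]                       # one root translation per frame
assert rots.shape[1] == len(parents) == offsets.shape[0]   # one rotation/offset per joint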

Lucifer-G0 commented 2 months ago


The results above come from the generator model, specifically 'dancedb/generator.pt' paired with 'data.pt'.

JLPM22 commented 2 months ago

Hi! I appreciate your interest in our project.

One question: are you using the model_xsens or model_dancedb? For testing on HUMAN4D, we trained on DanceDB, which is included in AMASS. The pre-trained model can be found at: https://github.com/UPC-ViRVIG/SparsePoser/tree/main/python/models/model_dancedb

Lucifer-G0 commented 2 months ago

Hi! I appreciate your interest in our project.

One question: are you using the model_xsens or model_dancedb? For testing on HUMAN4D, we trained on DanceDB, which is included in AMASS. The pre-trained model can be found at: https://github.com/UPC-ViRVIG/SparsePoser/tree/main/python/models/model_dancedb

I have re-downloaded your model and thoroughly reviewed my code, but I'm still getting the same results. I'm quite confused about this. Could you please help me check my code, or provide the code you used to evaluate the AMASS dataset? I would greatly appreciate it! You can reach me at linhai.student@foxmail.com. Thank you very much!

As a supplement, here is my main code:

def main(args):
    # Set seed
    torch.manual_seed(param["seed"])
    random.seed(param["seed"])
    np.random.seed(param["seed"])

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("Using device:", device)
    # Additional Info when using cuda
    if device.type == "cuda":
        print(torch.cuda.get_device_name(0))

    # Prepare Data
    eval_dir = args.data_path
    # check if train and eval directories exist
    if not os.path.exists(eval_dir):
        raise ValueError("eval directory does not exist")

    eval_files = []
    for root, dirs, files in os.walk(eval_dir):
        for file in files:
            # Check that the file ends with .npz and is not shape.npz
            if file.endswith('.npz') and file != 'shape.npz':
                full_path = os.path.join(root, file)
                eval_files.append(full_path)

    eval_dataset = TestMotionData(param, 1, device)
    # Eval Files
    for filename in eval_files:
        if filename[-4:] == ".npz":
            rots, pos, parents, offsets = get_info_from_npz(filename)
            # Eval Dataset
            eval_dataset.add_motion(
                offsets,
                pos,  # only global position
                rots,
                parents,
                filename
            )
    print(eval_dataset.get_len(), " added to eval_dataset")

    # Create Models
    train_data = Train_Data(device, param)
    generator_model = Generator_Model(device, param, xsens_parents, train_data).to(device)
    # Load  Model
    means, stds = load_model(generator_model, args.model_dir, train_data, device)
    eval_dataset.set_means_stds(means, stds)
    eval_dataset.normalize()

    results = evaluate_generator(generator_model, train_data, eval_dataset)
    mpjpe, mpeepe = eval_result(results, means, stds, device)
    evaluation_loss = mpjpe + mpeepe

    print("Evaluate Loss: {}".format(evaluation_loss))
    print("Mean Per Joint Position Error: {}".format(mpjpe))
    print("Mean End Effector Position Error: {}".format(mpeepe))

I run it using "python eval.py ../HUMAN4D ../models/model_dancedb/generator.pt".
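
For completeness, the two positional arguments are parsed like this (a minimal sketch; only the attribute names data_path and model_dir matter, since those are what main() reads):

import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Evaluate the SparsePoser generator on .npz motion files")
    parser.add_argument("data_path", help="directory containing the .npz files to evaluate")
    parser.add_argument("model_dir", help="path to the pre-trained generator, e.g. ../models/model_dancedb/generator.pt")
    main(parser.parse_args())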

JLPM22 commented 2 months ago

I quickly checked your code and it looks good to me. However, before having a more in-depth look, I suggest the following: