rd20karim / M2T-Segmentation

[NCAA] Official implementation of the paper Motion2Language, Unsupervised learning of synchronized semantic motion segmentation
https://arxiv.org/html/2310.10594v2
MIT License

Sharing Vocab and config? #4

Open · Stefano-retinize opened 7 hours ago

Stefano-retinize commented 7 hours ago

Hey, I wanted to get the h3D model working. I followed all the steps to get the dataset and ran the three steps to build the dataset from this repository, but I ended up with two problems:

  1. All the config.yaml files are different from the one used for the provided model_h3D_mask_TRUE_TF0.7_D10, so I've been trying to guess hidden_dim, embbeded_dim, and the other hyperparameters (see the sketch after this list).
  2. The vocabulary size is different from the one model_h3D_mask_TRUE_TF0.7_D10 was trained with: I ended up with around 5800 words.
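
For reference, this is roughly the kind of check I've been doing to compare configs; the glob pattern and the key names below are placeholders for whatever the repo actually defines, not its exact interface:

```python
# Rough diagnostic: locate every config*.yaml in the repo and print the
# hyperparameters of interest side by side. The glob pattern and the key
# names (hidden_dim, embbeded_dim) are placeholders, not the repo's exact ones.
import glob
import yaml  # PyYAML

for path in glob.glob("**/config*.yaml", recursive=True):
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    if isinstance(cfg, dict):
        print(path, {k: cfg.get(k) for k in ("hidden_dim", "embbeded_dim")})
```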

Is there any chance you could provide both the vocab object and the config for that model?

Thanks in advance for your help!

rd20karim commented 7 hours ago

Hey @Stefano-retinize,

The dataset building process is not only for constructing the vocabulary but also for ensuring that the download and processing of HumanML3D have been completed correctly to generate the corrected sentences for the h3D dataset. Otherwise, additional errors may occur when simply loading the vocabulary object. In any case, I will share the vocabulary object as soon as possible.
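
For illustration, loading a vocabulary object on its own would look roughly like the sketch below; the file name and the len() call are assumptions, not the repo's exact interface:

```python
# Sketch of loading a shared vocabulary object directly, assuming it is
# pickled. Unpickling requires the repo's vocabulary class to be importable,
# which is one source of the extra errors mentioned above.
import pickle

with open("vocab_h3D.pkl", "rb") as f:  # hypothetical file name
    vocab = pickle.load(f)

# This size must match the embedding layer of model_h3D_mask_TRUE_TF0.7_D10,
# otherwise the checkpoint weights will not load.
print("vocabulary size:", len(vocab))
```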

The h3D model is exactly the one available at this OneDrive link. Its config is given here: path_config_h3D

Regarding the vocabulary size mismatch, could you share the relevant output you get when running build_data.py?

Stefano-retinize commented 4 hours ago

Hey @rd20karim, thanks for your reply, and thanks for sharing that vocab object. I uploaded both generated files, all_humanML3D.npz and sentences_corrections_h3d.csv, to Drive so you can take a look if you want. Since my goal is to extrapolate your model to another dataset (still with SMPL characters), I think the vocab object alone should solve everything.
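
In case it helps, the uploaded files can be inspected with something like the snippet below; it assumes only that the .npz is valid and the CSV readable, nothing about their internal layout:

```python
# Quick inspection of the two generated artifacts.
import numpy as np
import pandas as pd

data = np.load("all_humanML3D.npz", allow_pickle=True)
print("npz arrays:", data.files)

corrections = pd.read_csv("sentences_corrections_h3d.csv")
print("corrections shape:", corrections.shape)
print(corrections.head())
```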

Thanks for your help