Stefano-retinize opened this issue 7 hours ago
Hey @Stefano-retinize,
The dataset building process does more than construct the vocabulary: it also verifies that the download and processing of HumanML3D completed correctly, so that the corrected sentences for the h3D dataset can be generated. Otherwise, additional errors may occur when simply loading the vocabulary object. In any case, I will share the vocabulary object as soon as possible.
The h3D model is exactly the one available here: [one drive link]. Its config is given here: path_config_h3D
Regarding the vocabulary size mismatch, could you share the relevant output you get when running build_data.py?
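As a minimal sketch of what "vocabulary size mismatch" usually means here: the number of entries in the saved vocabulary object must match the size the model's embedding layer was built with. The snippet below is a hypothetical, self-contained illustration (the toy dict, `expected_size`, and the pickle round-trip are stand-ins; the actual vocab class and the size in the model config come from this repo).

```python
import io
import pickle

# Hypothetical toy vocabulary: a word-to-index mapping, as many
# text-to-motion pipelines store it (the real object may be a class).
toy_vocab = {"<pad>": 0, "<unk>": 1, "walk": 2, "run": 3}

# Round-trip through pickle in memory, mimicking saving/loading
# the vocab object to disk.
buf = io.BytesIO()
pickle.dump(toy_vocab, buf)
buf.seek(0)
loaded = pickle.load(buf)

# Stand-in for the vocabulary size the model checkpoint expects
# (in practice, read it from the model config / embedding shape).
expected_size = 4
assert len(loaded) == expected_size, (
    f"vocab size mismatch: got {len(loaded)}, model expects {expected_size}"
)
print(len(loaded))
```

If the assertion fails, the vocabulary was built from different data (or with different corrections) than the one used to train the checkpoint, which is exactly why sharing the original vocab object resolves the issue.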
Hey @rd20karim. Thanks for your reply, and thanks for sharing that vocab object. I uploaded both the generated all_humanML3D.npz and sentences_corrections_h3d.csv files to drive so you can take a look if you want. But since my goal is to extrapolate your model to another dataset (also using SMPL characters), I think everything would be solved with that vocab object alone.
Thanks for your help
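For anyone comparing their generated archive against the shared one: a quick way to check an `.npz` file is to list its stored arrays and shapes before loading it into the model. This is a hedged sketch; the key names below (`sentences`, `lengths`) are made up for illustration and will differ in the actual all_humanML3D.npz.

```python
import io
import numpy as np

# Build a tiny in-memory .npz with hypothetical keys, standing in
# for a file like all_humanML3D.npz on disk.
buf = io.BytesIO()
np.savez(buf,
         sentences=np.array(["a person walks forward"]),
         lengths=np.array([1]))
buf.seek(0)

# np.load works the same way on a real path:
# archive = np.load("all_humanML3D.npz", allow_pickle=True)
archive = np.load(buf, allow_pickle=True)
for key in sorted(archive.files):
    print(key, archive[key].shape)
```

Comparing the key list and shapes from two archives is often enough to spot where a preprocessing run diverged.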
Hey, I wanted to get the h3D model working. I followed all the steps to obtain the dataset and ran the three steps to build the dataset from this repository, but I ended up with two problems:
Is there any chance you could provide both the vocab object and the config for that model?
Thanks in advance for your help!