Hi, thanks for open-sourcing this! I ran `/scripts/preprocess_dataset.py`, but I don't think the 6D pose sequences were extracted. Could you let me know how to do this? Do you use MediaPipe or some other tool?
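For context, this is roughly the kind of extraction I had in mind: a minimal sketch that estimates a per-frame 6D head pose (3 rotation + 3 translation) from MediaPipe FaceMesh landmarks via OpenCV's `solvePnP`. The landmark indices, 3D model points, and camera intrinsics below are my own guesses, not from this repo, so please correct me if your pipeline works differently.

```python
# Sketch: per-frame 6D head pose (rotation vector + translation) from a video,
# assuming MediaPipe FaceMesh + OpenCV solvePnP. Landmark IDs and the canonical
# 3D model points are illustrative guesses, not values from this repository.
import cv2
import numpy as np
import mediapipe as mp

# A few stable FaceMesh landmarks: nose tip, chin, eye outer corners, mouth corners
LANDMARK_IDS = [1, 152, 33, 263, 61, 291]
# Rough canonical 3D positions (mm) for those landmarks -- illustrative values
MODEL_POINTS = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, -63.6, -12.5],    # chin
    [-43.3, 32.7, -26.0],   # left eye outer corner
    [43.3, 32.7, -26.0],    # right eye outer corner
    [-28.9, -28.9, -24.1],  # left mouth corner
    [28.9, -28.9, -24.1],   # right mouth corner
], dtype=np.float64)

def extract_6d_pose(video_path):
    poses = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                poses.append(np.zeros(6))  # no face detected in this frame
                continue
            lm = result.multi_face_landmarks[0].landmark
            image_points = np.array(
                [[lm[i].x * w, lm[i].y * h] for i in LANDMARK_IDS], dtype=np.float64
            )
            # Simple pinhole intrinsics guessed from the image size
            cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
            _, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, cam, None)
            poses.append(np.concatenate([rvec.ravel(), tvec.ravel()]))  # 3 rot + 3 trans
    cap.release()
    return np.stack(poses)
```

Is this close to what the preprocessing script is supposed to produce, or do you use a different pose estimator entirely?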
Is audio2pose also trained on the same 1 hour of data used to train audio2mesh?