haixpham / end2end_AU_speech

Code for the paper "End-to-end Learning for 3D Facial Animation from Speech"
MIT License

labels.npy for arbitrary audio/video pairs #3

Open RaymondDixon opened 5 years ago

RaymondDixon commented 5 years ago

Hi @haixpham, thanks for making the code available to the community. I am working on reproducing your results as part of my research. You have provided a link to AU labels for the RAVDESS dataset. How would I create/acquire labels.npy files if I want to train or run inference on an arbitrary video/audio pair?

haixpham commented 5 years ago

Thanks for your interest in my work! AU labels are extracted using my private face tracker; there is a reason it is not available.
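
Lacking the author's private tracker, one possible workaround is to extract per-frame AU intensities with a public tool such as OpenFace and save them as a NumPy array. The sketch below is an assumption, not the repo's actual pipeline: it presumes OpenFace's FeatureExtraction CSV output (intensity columns AU01_r ... AU45_r) and a (num_frames, num_AUs) float32 layout; the AU subset and ordering would still need to be matched against the released RAVDESS labels.

```python
import numpy as np
import pandas as pd

def openface_csv_to_labels(csv_path, out_path="labels.npy"):
    """Convert an OpenFace FeatureExtraction CSV into a labels.npy array.

    Hypothetical helper: the AU set/order expected by this repo is unverified.
    """
    df = pd.read_csv(csv_path)
    # OpenFace pads CSV headers with spaces, e.g. " AU01_r"
    df.columns = df.columns.str.strip()
    # Keep only AU intensity columns (regression outputs end in "_r")
    au_cols = sorted(c for c in df.columns if c.startswith("AU") and c.endswith("_r"))
    labels = df[au_cols].to_numpy(dtype=np.float32)  # shape: (num_frames, num_AUs)
    np.save(out_path, labels)
    return labels

# Usage (hypothetical paths):
# labels = openface_csv_to_labels("processed/video01.csv", "labels.npy")
```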

meherabhi commented 3 years ago

Hi @haixpham, thanks for providing the data and code. In the data you have provided, do the labels correspond to the standard FACS action units AU[0]...AU[46], or are you selecting specific action unit numbers?