Doubiiu / CodeTalker

[CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
MIT License

Vocaset-test lip sync error comparison results #35

Closed khalidhnv closed 1 year ago

khalidhnv commented 1 year ago

Hello & thanks for your work. I was wondering about the lip sync error comparison on the VOCASET test data. I saw it reported for BIWI but couldn't find the corresponding numbers for VOCASET in the paper. Please let me know if I'm missing something.

Doubiiu commented 1 year ago

Hi, please refer to section 4.2 of our paper. We quantitatively evaluate and compare on BIWI-Test-A, as it contains the same subjects as the training set (we call them "seen" subjects). For VOCASET, in contrast, following FaceFormer's data split, the test set contains only subjects unseen during training, so it is not suitable for quantitative evaluation.
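For context, the lip sync error discussed here is typically computed as a lip vertex error: the maximal L2 deviation over the lip-region vertices in each frame, averaged over all frames of a sequence. A minimal sketch of that metric follows, assuming `(T, V, 3)` vertex arrays and a dataset-specific list of lip vertex indices (the indices below are hypothetical placeholders, not the actual BIWI lip region):

```python
import numpy as np

def lip_vertex_error(pred, gt, lip_idx):
    """Maximal per-frame L2 error over lip vertices, averaged over frames.

    pred, gt: (T, V, 3) arrays of predicted / ground-truth vertex positions.
    lip_idx:  indices of the lip-region vertices (dataset-specific).
    """
    # per-vertex L2 distance restricted to the lip region: shape (T, len(lip_idx))
    dist = np.linalg.norm(pred[:, lip_idx] - gt[:, lip_idx], axis=-1)
    # worst lip vertex in each frame, then the mean over frames
    return dist.max(axis=1).mean()

# toy example with random meshes (real use: BIWI-Test-A sequences)
rng = np.random.default_rng(0)
gt = rng.standard_normal((100, 500, 3))
pred = gt + 0.001 * rng.standard_normal((100, 500, 3))
lip_idx = np.arange(50)  # hypothetical lip-region vertex indices
print(lip_vertex_error(pred, gt, lip_idx))
```

Comparing this number across methods only makes sense when the ground-truth subjects were seen in training, which is why BIWI-Test-A is used rather than the VOCASET test split.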

khalidhnv commented 1 year ago

I see, thanks for the explanation. Closing this issue then.