Rudrabha / Lip2Wav

This is the repository containing the code for our CVPR 2020 paper titled "Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis"
MIT License

training and test split for GRID and TCD-TIMIT #6

Closed WillQuCD closed 4 years ago

WillQuCD commented 4 years ago

Nice work! Could you please share the train, validation, and unseen test splits for GRID and TCD-TIMIT used in your paper? Does "unseen" mean unseen speakers or unseen sentences? Do you also train one model per speaker?

Thanks!

prajwalkr commented 4 years ago

"Unseen" means unseen sentences (not unseen speakers) because, models trained on GRID and TIMIT are speaker-specific, so one model for each speaker.

enhuiz commented 4 years ago

Could the authors share the exact splits for GRID and TCD-TIMIT, to allow fair comparisons in future work? Many thanks.