KU-CVLAB / GaussianTalker

Official implementation of “GaussianTalker: Real-Time High-Fidelity Talking Head Synthesis with Audio-Driven 3D Gaussian Splatting” by Kyusun Cho, Joungbin Lee, Heeji Yoon, Yeobin Hong, Jaehoon Ko, Sangjun Ahn and Seungryong Kim

How to specify the audio for inference? #5

Closed einsqing closed 1 month ago

einsqing commented 2 months ago

How do I specify which audio is used for inference?

joungbinlee commented 2 months ago

Thank you for using our model:)

As in prior talking-head methods such as ER-NeRF, the first 10/11 of the total data is used for training, and the remaining 1/11 is held out as the audio for inference.
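That 10/11 : 1/11 split can be sketched as a simple frame-index partition. This is a minimal illustration, not code from the GaussianTalker repository; `split_train_test` and its arguments are hypothetical names.

```python
def split_train_test(frames, num_folds=11):
    """Hypothetical helper: hold out the last 1/num_folds of per-frame
    data (e.g. audio features) as the inference set."""
    # first (num_folds - 1)/num_folds of frames go to training
    split = len(frames) * (num_folds - 1) // num_folds
    return frames[:split], frames[split:]

# e.g. 1100 frames -> 1000 training frames, 100 inference frames
train, test = split_train_test(list(range(1100)))
print(len(train), len(test))  # → 1000 100
```

To drive the model with custom audio at inference time, the held-out segment would be replaced by features extracted from the new audio track.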