KU-CVLAB / GaussianTalker

Official implementation of “GaussianTalker: Real-Time High-Fidelity Talking Head Synthesis with Audio-Driven 3D Gaussian Splatting” by Kyusun Cho, Joungbin Lee, Heeji Yoon, Yeobin Hong, Jaehoon Ko, Sangjun Ahn and Seungryong Kim

Why is the deformation model moved to CPU? #51

Open Saksham209 opened 1 week ago

Saksham209 commented 1 week ago

I noticed that during training the deformation model is moved to the CPU at the start of the fine stage. The coarse stage trains quite fast, while the fine stage takes considerably longer. Is there a specific reason for doing this?

I am referring to this piece of code.

if stage == "fine" and first_iter == 0:
    gaussians.mlp2cpu()

joungbinlee commented 1 week ago

Hello, thanks for using our project.

The code above unloads models that are no longer needed from the GPU during the fine stage. All remaining models still run on the GPU; the first (coarse) stage is relatively fast, while the second (fine) stage takes about 1 to 2 hours.

Thank you.
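For context, the effect of such an offload can be sketched in PyTorch as follows. The method name `mlp2cpu` comes from the snippet above; its body here is only an assumption for illustration, not the repository's actual implementation:

```python
import torch

def mlp2cpu(module: torch.nn.Module) -> None:
    # Assumed sketch: move the deformation MLP's parameters and buffers
    # to host memory so they no longer occupy GPU memory in the fine stage.
    module.to("cpu")
    # Release cached allocator blocks that the moved tensors were using.
    torch.cuda.empty_cache()
```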

Saksham209 commented 1 week ago


Okay, got it.

Also, while going through the research paper I noticed it mentions using an LPIPS loss, but I couldn't find it in the training code. I did see a perceptual loss term; was LPIPS replaced by this? If so, could you clarify why that change was made?

Thank you
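For reference, the LPIPS loss mentioned in the paper is commonly computed with the `lpips` package roughly like this. This is a minimal sketch with placeholder tensors; whether the repository's perceptual loss term wraps LPIPS or a different VGG-feature loss is exactly the question above:

```python
import torch
import lpips

# LPIPS perceptual distance (Zhang et al.); expects images in [-1, 1], shape (N, 3, H, W).
loss_fn = lpips.LPIPS(net="vgg")

pred = torch.rand(1, 3, 256, 256) * 2 - 1   # rendered frame (placeholder data)
gt   = torch.rand(1, 3, 256, 256) * 2 - 1   # ground-truth frame (placeholder data)

perceptual_loss = loss_fn(pred, gt).mean()  # scalar term that could be added to a training loss
```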