sstzal / DFRF

[ECCV2022] The implementation for "Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis".
MIT License

Clarification on how to use this? #22

Closed AIMads closed 1 year ago

AIMads commented 1 year ago

Great work with a lot of potential, but we need clarification on how to use it.

  1. After you have trained a model, how do you swap in a new audio clip so the rendering is driven by that audio instead of the one used for training?

  2. How do you create this as an output video with the rendered images and the audio?

I have trained a model on my own video clip now, but don't know what to do with it.
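For question 2, the standard approach is to mux the rendered frame sequence with the driving audio track using ffmpeg. A minimal sketch, assuming the frames are saved as zero-padded PNGs at 25 fps (the directory layout, file naming, and fps here are assumptions, not something DFRF guarantees):

```python
# Hypothetical sketch: build an ffmpeg command that combines rendered
# frames with an audio track into one video. Paths, frame naming, and
# fps are assumptions about your output, not part of the DFRF repo.
import subprocess

def build_mux_cmd(frames_dir, audio_path, out_path, fps=25):
    """Return the ffmpeg argument list; run it with subprocess.run(cmd, check=True)."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", f"{frames_dir}/%04d.png",  # rendered frames: 0001.png, 0002.png, ...
        "-i", audio_path,                # the driving audio (e.g. your new .wav)
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",           # broad player compatibility
        "-c:a", "aac",
        "-shortest",                     # stop at the shorter of video/audio
        out_path,
    ]

cmd = build_mux_cmd("renders", "new_audio.wav", "result.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually encode
```

The `-shortest` flag avoids a trailing frozen frame or silence when the frame count and audio length don't match exactly.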

CODER4715 commented 1 year ago

+1

CODER4715 commented 1 year ago

I appreciate your work, and I want to know how to make a new video with new audio and a trained model.

exceedzhang commented 1 year ago

me too

wzx7084 commented 1 year ago

+1

dafang commented 1 year ago

+1 @sstzal

sstzal commented 1 year ago

Refer to https://github.com/sstzal/DFRF/issues/10