Could you please explain (or ideally upload the scripts) how to apply FlashAvatar to the face reenactment task?
Should I train FlashAvatar on the source head, run tracking on the target head, and then in test.py use the viewpoints obtained from the target together with the Gaussians from the trained FlashAvatar?
I am a bit confused by the pipeline, so I would be happy if you could upload the reenactment script.
For face reenactment, you need to use relative expression transfer because the expression and shape are not well disentangled in FLAME tracking.
You can take a look at NeRFace to see how it is done.
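The relative transfer described above can be sketched as follows. This is a minimal illustration, not code from the FlashAvatar repository: the function name and the assumption that FLAME expression codes are plain vectors (one per frame, with a designated neutral frame for each identity) are mine. The idea is to transfer only the per-frame *change* of the target's expression relative to its neutral frame, added onto the source's neutral expression, so that the target's identity (which leaks into the expression codes because shape and expression are not fully disentangled) is not baked into the reenactment:

```python
import numpy as np

def relative_expression_transfer(source_neutral_exp, target_exps, target_neutral_exp):
    """Hypothetical sketch of relative expression transfer.

    source_neutral_exp: (D,)   FLAME expression code of the source's neutral frame
    target_exps:        (T, D) per-frame FLAME expression codes tracked on the target
    target_neutral_exp: (D,)   expression code of the target's neutral frame

    Returns (T, D) expression codes to drive the source avatar:
        source_exp[t] = source_neutral_exp + (target_exps[t] - target_neutral_exp)
    """
    # Per-frame offsets of the target relative to its own neutral expression.
    deltas = target_exps - target_neutral_exp[None, :]
    # Apply those offsets on top of the source's neutral expression.
    return source_neutral_exp[None, :] + deltas
```

The resulting codes would then be fed to the trained avatar at test time in place of the raw target expressions; the same relative treatment is often applied to jaw/neck pose parameters as well.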