In "Deferred Neural Rendering: Image Synthesis using Neural Textures", the paper this repo is based on, the authors explain how they use a generated UV map of a 3D face model, together with the RGB of the face, to produce a "deepfaked" face output with their neural rendering method. I understand this is possible with this repo for a single facial pose, but I would like to use it, as in the demonstration, with an animated UV map and texture, so to speak, where the camera doesn't move but the model/UV and texture do.
I have been using another repo, Face2Face, to generate the model and transfer the expressions with some custom code, and it would be absolutely incredible if I could realistically neurally render it with this repo.
Is this possible?
(Even if it isn't, I kinda want to try anyway and see what I get just for fun)
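To make what I'm imagining concrete, here is a minimal sketch of the per-frame idea, assuming a PyTorch-style pipeline. The names (`render_frame`, `neural_texture`, `renderer`) are hypothetical and not this repo's actual API; the point is just that only the UV map would change from frame to frame while the learned texture and network stay fixed.

```python
# Hypothetical sketch, NOT this repo's API: sample a learned neural texture
# with a per-frame UV map, then let a trained rendering network turn the
# sampled features into an RGB frame.
import torch
import torch.nn.functional as F

def render_frame(neural_texture, uv_map, renderer):
    """
    neural_texture: (1, C, H, W) learned feature texture (C ~ 16 in the paper)
    uv_map:         (1, Hout, Wout, 2) per-pixel UV coords in [0, 1] for this frame
    renderer:       trained network mapping sampled features -> RGB image
    """
    # grid_sample expects coordinates in [-1, 1]
    grid = uv_map * 2.0 - 1.0
    sampled = F.grid_sample(neural_texture, grid, align_corners=False)
    return renderer(sampled)

# For an animated sequence, only the UV map changes per frame; the camera,
# neural texture, and renderer weights would stay fixed:
# frames = [render_frame(tex, uv_t, net) for uv_t in uv_sequence]
```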