Dian-Yi opened 4 years ago
Is the training process like deepfakes/faceswap or DeepFaceLab, where only the trained face can be swapped at test time?
An early work by Bao et al. takes a very similar approach to ours.
Content/style disentanglement has been used in many image-translation papers, e.g. DRIT and MUNIT, each with its own tweaks to the objectives and model architecture. In our case, we additionally inject prior knowledge of the human face, both as inputs and as loss functions.
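To make the disentanglement idea concrete, here is a toy sketch: one encoder extracts an identity (content) code, another extracts an attribute (style) code, and a decoder recombines any identity with any attributes, which is what makes the swap identity-agnostic. Everything below (names, dimensions, the linear maps standing in for trained networks) is hypothetical and illustrative, not the actual architecture from this repo or from Bao et al.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8       # toy "face embedding" size (hypothetical)
ID_DIM = 4    # identity-code size (hypothetical)
ATTR_DIM = 4  # attribute-code size (hypothetical)

# Random linear maps stand in for trained encoder/decoder networks.
W_id = rng.normal(size=(ID_DIM, DIM))
W_attr = rng.normal(size=(ATTR_DIM, DIM))
W_dec = rng.normal(size=(DIM, ID_DIM + ATTR_DIM))

def encode_identity(face):
    """Identity (content) encoder: who the person is."""
    return W_id @ face

def encode_attributes(face):
    """Attribute (style) encoder: pose, expression, lighting."""
    return W_attr @ face

def decode(id_code, attr_code):
    """Decoder: render a face from an identity + attribute pair."""
    return W_dec @ np.concatenate([id_code, attr_code])

source = rng.normal(size=DIM)  # face providing the identity
target = rng.normal(size=DIM)  # face providing pose/expression

# A swap is decode(identity of source, attributes of target);
# since the encoders are shared across identities, the same model
# handles faces never seen during training.
swapped = decode(encode_identity(source), encode_attributes(target))
print(swapped.shape)  # (8,)
```

Because nothing in the encoders is tied to a specific person, one trained model generalizes to arbitrary face pairs, unlike per-identity models such as DeepFaceLab.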
Thanks for your answer. I reimplemented the paper "Towards Open-Set Identity Preserving Face Synthesis" by Bao et al., but the results are poor for faces outside the training set, and the loss is hard to converge. So I want to know: how do you implement swapping of arbitrary faces with only one model?
I have been studying face swapping recently. Most papers' models can only swap one specific person's face. I want to know whether this one is the same?