rshaojimmy / MultiModal-DeepFake

[TPAMI 2024 & CVPR 2023] PyTorch code for DGM4: Detecting and Grounding Multi-Modal Media Manipulation and beyond

About manipulated face re-render #40

Open YcZhangSing opened 3 months ago

YcZhangSing commented 3 months ago

Could you please give some information about the process you mentioned in your paper: "After obtaining the manipulated face I_f_emo, we re-render it back onto the original image I_o to obtain the manipulated sample I_a. Bbox y_box is also provided."

How is this achieved, with a dedicated re-rendering model or some existing tool?

Thank you !

rshaojimmy commented 3 months ago

We use classical re-rendering methods such as Poisson blending. Thanks.
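For reference, Poisson blending solves a Laplace-type equation inside the pasted region so that the source's gradients are kept while the boundary matches the destination image. The sketch below is a minimal, dependency-free illustration using Jacobi iteration on a single-channel float image; it is not the authors' code, and in practice one would typically call OpenCV's `cv2.seamlessClone`, which implements the same idea efficiently. The mask is assumed not to touch the image border.

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=500):
    """Gradient-domain (Poisson) blending via Jacobi iteration.

    src, dst : float arrays of shape (H, W)
    mask     : bool array, True where src content replaces dst
    The masked region must not touch the image border.
    """
    out = dst.copy()
    out[mask] = src[mask]  # initial guess inside the region
    # Guidance field: the Laplacian of the source image.
    lap_src = 4 * src - (
        np.roll(src, -1, axis=0) + np.roll(src, 1, axis=0)
        + np.roll(src, -1, axis=1) + np.roll(src, 1, axis=1)
    )
    for _ in range(iters):
        # Jacobi update: average of the 4 neighbours plus the source Laplacian.
        neigh = (
            np.roll(out, -1, axis=0) + np.roll(out, 1, axis=0)
            + np.roll(out, -1, axis=1) + np.roll(out, 1, axis=1)
        )
        new = (neigh + lap_src) / 4.0
        out[mask] = new[mask]  # only interior pixels are updated
    return out
```

With a constant source patch the guidance Laplacian is zero, so the blended region smoothly interpolates the destination boundary values, which is exactly the "seamless" behaviour that hides the paste seam.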

YcZhangSing commented 2 months ago

Thank you for your reply. However, when I applied the re-rendering method you suggested, I found that blending the edited face output by HFGI or StyleCLIP back into the original image requires accurately locating the face region both in the original image and in the HFGI/StyleCLIP output; otherwise the blended result shows obvious artifacts. How did you solve this alignment problem? Thank you for your timely reply!