jtchen0528 / PCL-I2G

Unofficial Implementation: Learning Self-Consistency for Deepfake Detection

About masks when learning with FF++ #10

Closed Takase-Syunki closed 2 years ago

Takase-Syunki commented 2 years ago

Hello, thanks for your code implementation. Could you tell me how to prepare the masks when training on the FF++ dataset?

jtchen0528 commented 2 years ago

According to the paper, the mask is generated from the convex hull of 68 facial landmarks on a face (Sec. 3.2, I2G). In data/I2G_dataset.py, I randomly select 32 frames (Sec. 4.1) and compute the convex hull of their landmarks. That's how I prepare the masks; I believe it's in the I2G generation code.
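As a rough sketch of that idea (not the repo's actual code): given the 68 landmark points from a detector such as dlib, the mask is just the filled convex hull of those points. The helper names below are hypothetical, and the landmark detection itself is assumed to happen elsewhere.

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_mask(landmarks, height, width):
    """Rasterize the convex hull of (x, y) landmarks into a {0,1} mask.

    `landmarks` would be the 68 points from a facial landmark detector;
    a point is inside iff it lies left of every CCW hull edge.
    """
    hull = convex_hull(landmarks)
    ys, xs = np.mgrid[0:height, 0:width]   # pixel grid: ys = rows, xs = cols
    mask = np.ones((height, width), dtype=bool)
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        mask &= (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0) >= 0
    return mask.astype(np.uint8)
```

In practice the repo-style code would use OpenCV's `cv2.convexHull` and `cv2.fillConvexPoly` instead of this pure-NumPy fill; the output is the same kind of binary mask.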

Takase-Syunki commented 2 years ago

I'm sorry, I wasn't clear. How do I prepare a mask for training with "real fake" images (the fake data from the FF++ dataset) instead of the I2G-generated fakes? What is the process for that?

jtchen0528 commented 2 years ago

Oh, I misunderstood.

I did that by detecting faces in the original videos. Each FF++ fake video is composed of two real videos, and FF++ does list which is the background video and which provides the swapped face. Those pairs are listed in a CSV file (as I remember, maybe I'm wrong), or you can read them from the fake videos' filenames, which follow the pattern XXX_XXX.mp4. One of the two IDs is the background video.
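For the filename route, a minimal (hypothetical, not from this repo) helper would just split the `XXX_XXX.mp4` stem into its two real-video IDs; which of the two is the background video should still be verified against FF++'s own pairing metadata.

```python
from pathlib import Path

def split_pair_ids(fake_video_name):
    """Split an FF++ fake-video filename 'XXX_XXX.mp4' into its two
    real-video IDs. This only parses the name; check FF++'s pairing
    metadata to know which ID is the background video."""
    stem = Path(fake_video_name).stem        # e.g. '033_097'
    first_id, second_id = stem.split("_")
    return first_id, second_id
```

With the background-video ID in hand, you run the landmark detection on that real video's frames to build the mask for the corresponding fake frames.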

I detected the face landmarks in the background video and generated the mask from them, so I get accurate masks for the "real fake" images. The code might not be in this GitHub repo, sorry about that.

Takase-Syunki commented 2 years ago

Thank you very much. I see. So that's how you do it. I'm sorry, but could you upload that code to this github?

jtchen0528 commented 2 years ago

Well, I kinda lost the code, but it is very similar to the I2G generation code. Use the part where I detect the face landmarks and crop out the convex hull.

Takase-Syunki commented 2 years ago

I see. Thank you very much. I kind of understand how to do it.