HongguLiu / Deepfake-Detection

The PyTorch implementation of Deepfake Detection based on FaceForensics++
https://github.com/ondyari/FaceForensics
Apache License 2.0

Some details of training? #6

Open zj19921221 opened 4 years ago

zj19921221 commented 4 years ago

Hi, I got a bad training result with my own approach. I have some questions:

0. I save 1 image per 5 frames from every mp4. Is that OK?
1. Why choose size 299? If I don't resize to 299, will I get a bad result?
2. I only use the HQ data, and during training I combine the YouTube sequences as real and all the manipulated sequences as fake. Is that OK?

Looking forward to your answer; thanks a lot.

HongguLiu commented 4 years ago

First, it's OK to extract 1 image per 5 frames from every video. About the second question, we used torchvision.transforms to resize the input images of the network, so you can resize images to 299, but it is not necessary. Last, you can train your models on the HQ data only, but they may not perform well when you test them on LQ data.
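For anyone following along, here is a minimal sketch of the two steps discussed above, assuming OpenCV for frame extraction and torchvision for resizing. The helper name `extract_frames`, the paths, the sampling interval, and the normalization values are illustrative placeholders, not code from this repository.

```python
import os
import cv2
from torchvision import transforms

def extract_frames(video_path, out_dir, every_n=5):
    """Save every `every_n`-th frame of a video as a JPEG (illustrative helper)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Resize at load time with torchvision.transforms, as mentioned above.
# 299 matches Xception's expected input size; other sizes also work if the
# backbone accepts them. The normalization values here are just an example.
train_transform = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```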

zj19921221 commented 4 years ago

Thanks a lot for your reply.

1. What is your way of extracting and cropping faces from the frames? Using dlib, as shown in the script named "detect_from_video.py"?
2. Can we add up all the manipulated sequences (Deepfakes, Face2Face, FaceSwap, NeuralTextures) as the fake part?

HongguLiu commented 4 years ago

1. In our experiment, we used the open-source MTCNN to detect faces when building our dataset. In detect_from_video.py, we used dlib to detect and crop faces because it is fast and efficient.
2. Of course, you can build your own dataset, but you should keep the ratio of real to fake samples approximately equal.
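A minimal sketch of dlib-based face cropping in the spirit of detect_from_video.py, for reference only; the function name `crop_largest_face` and the enlargement factor are illustrative assumptions, not the repository's exact code.

```python
import cv2
import dlib

# dlib's HOG-based frontal face detector (the same detector family used in
# detect_from_video.py).
detector = dlib.get_frontal_face_detector()

def crop_largest_face(frame, scale=1.3):
    """Return an enlarged square crop around the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if len(faces) == 0:
        return None
    # Keep the largest detection and enlarge the box a little for context.
    face = max(faces, key=lambda r: r.width() * r.height())
    cx = (face.left() + face.right()) // 2
    cy = (face.top() + face.bottom()) // 2
    half = int(max(face.width(), face.height()) * scale) // 2
    h, w = frame.shape[:2]
    x1, y1 = max(cx - half, 0), max(cy - half, 0)
    x2, y2 = min(cx + half, w), min(cy + half, h)
    return frame[y1:y2, x1:x2]
```

On the balancing point: since FaceForensics++ has four manipulation types but only one set of real videos, one common choice (not necessarily what this repo does) is to sample fewer frames per fake video or more frames per real video so the two classes end up roughly the same size.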