megvii-research / CADDM

Official implementation of ID-unaware Deepfake Detection Model
Apache License 2.0

About set of train data #37

Closed Gnonymous closed 6 months ago

Gnonymous commented 7 months ago

I would like to know how the training data is set up: how many frames are taken from each video? For example, how many frames are used for each of the five sources (youtube, Deepfakes, FaceSwap, ...)? Judging from the code in extract_frames_ldm_ff++.py, is only one frame taken from each video? Thanks for your reply and your work!

Nku-cs-dsc commented 7 months ago

We sample 32 frames from each video during the training phase.
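A simple way to pick 32 frames from a video is to take evenly spaced indices across its length. This is only an illustrative sketch, not the repo's actual extraction code (extract_frames_ldm_ff++.py may sample differently); the helper name `sample_frame_indices` is hypothetical:

```python
import numpy as np

def sample_frame_indices(total_frames: int, num_samples: int = 32) -> list:
    """Return up to `num_samples` evenly spaced frame indices.

    Hypothetical helper for illustration; the repo's script may use a
    different sampling strategy (e.g. random or fixed-stride sampling).
    """
    if total_frames <= 0:
        return []
    n = min(num_samples, total_frames)
    # Evenly spaced indices from the first frame to the last frame.
    return np.linspace(0, total_frames - 1, n, dtype=int).tolist()

# Example: a 320-frame video yields 32 indices spread over the whole clip.
indices = sample_frame_indices(320, 32)
```

The sampled indices could then be passed to a video reader (e.g. OpenCV's `cv2.VideoCapture`) to decode only those frames before running face extraction.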