nii-yamagishilab / ClassNSeg

Implementation and demonstration of the paper: Multi-task Learning for Detecting and Segmenting Manipulated Facial Images and Videos
BSD 3-Clause "New" or "Revised" License

Dataset splits used for the experiments #2

Closed: nviable closed this issue 5 years ago

nviable commented 5 years ago

Would you happen to have documentation of some sort for the splits that you used in your experiments?

My group wanted to get the exact data splits so that we could directly compare the performance of any methods we devise in the future to yours. Hopefully this would help anyone else in the field as well.

You did mention in your paper that the split was 704 | 150 | 150, but that is probably for the older FaceForensics dataset (since the new one has 1000 videos per manipulation type). I'm not really sure how similar the two datasets are.

> Each dataset was split into 704 videos for training, 150 for validation, and 150 for testing.

The FaceForensics++ dataset has JSON files documenting the exact splits, which is pretty useful (although theirs is 720 | 140 | 140), but I can't be sure whether your group used them since the dataset was introduced fairly recently. https://github.com/ondyari/FaceForensics/tree/master/dataset/splits

Edit: added a mention of the FaceForensics++ splits.

honghuy127 commented 5 years ago

As you can see in Table 1 of the paper, we split the FaceForensics++ data into 720 | 140 | 140. We used the split information from the provided JSON files.
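
For anyone reproducing this later, here is a minimal sketch of reading those split files. It assumes a local checkout of the linked splits folder, the `train.json`/`val.json`/`test.json` file names, and the `[target, source]` video-ID pair layout used in that repository; check the files themselves if the layout has changed.

```python
import json
from pathlib import Path

# Hypothetical path to a local checkout of
# https://github.com/ondyari/FaceForensics/tree/master/dataset/splits
SPLITS_DIR = Path("FaceForensics/dataset/splits")

def load_split(name: str) -> list[str]:
    """Load one split file ("train", "val", or "test").

    Each file is assumed to hold a JSON list of [target, source]
    video-ID pairs, e.g. [["720", "672"], ...]. Flattening the pairs
    yields the set of pristine video IDs belonging to that split.
    """
    with open(SPLITS_DIR / f"{name}.json") as f:
        pairs = json.load(f)
    return sorted({vid for pair in pairs for vid in pair})

train_ids = load_split("train")
val_ids = load_split("val")
test_ids = load_split("test")
print(len(train_ids), len(val_ids), len(test_ids))  # expect 720 | 140 | 140
```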

nviable commented 5 years ago

Got it, thanks!