Closed: neshaat closed this issue 10 months ago
Our model is trained on deepfake detection datasets using different backbones pretrained on ImageNet. The released checkpoints help users quickly reproduce the model performance reported in our paper. Note that our proposed Multi-scale Face Swap (MFS) method manipulates fakes based on fake-source image pairs (a fake image and its corresponding source image). Therefore, you should reorganize your training set to explicitly model such image pairs before training your deepfake detector.
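For readers wondering what "reorganizing into fake-source pairs" could look like in practice, here is a minimal sketch. It assumes an FF++-style naming scheme where a manipulated clip is named `<target>_<source>.mp4` and its frame-aligned pristine counterpart is `<target>.mp4`; the exact layout and helpers (`pair_for_fake`, `build_pairs`) are illustrative placeholders, not the repo's actual code.

```python
# Hypothetical sketch: pair each fake clip with its pristine counterpart,
# assuming FF++-style names where a fake is "<target>_<source>.mp4" and the
# frame-aligned original is "<target>.mp4". Adjust to the repo's real layout.
import os

def pair_for_fake(fake_name):
    """Map a fake filename to its assumed original filename, or None."""
    stem, ext = os.path.splitext(fake_name)
    if "_" not in stem:
        return None  # pristine clip or unexpected name: nothing to pair
    target_id, _source_id = stem.split("_", 1)
    return target_id + ext

def build_pairs(fake_dir, original_dir):
    """Collect (fake_path, original_path) tuples that exist on disk."""
    pairs = []
    for name in sorted(os.listdir(fake_dir)):
        original = pair_for_fake(name)
        if original and os.path.exists(os.path.join(original_dir, original)):
            pairs.append((os.path.join(fake_dir, name),
                          os.path.join(original_dir, original)))
    return pairs
```

A manifest built this way can then be consumed by the training data loader so each fake sample carries a pointer to its source frame.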
@Nku-cs-dsc Thank you for your response. Could you please guide me on how to evaluate your pretrained model on a single video or image? I've already downloaded the efficientnet-b3, efficientnet-b4, and resnet34.pkl models, and I tried running `python3 test.py --cfg ./configs/caddm_test.cfg`. However, I got an error about a missing `./test_images/ldm.json`. Is there a way to test your model without having to download the FF++ dataset?
Hi, we use Dlib to detect face landmarks, then align and crop faces based on those landmarks before feeding images into the model. Therefore, before testing the model on your self-collected videos, you should detect the face landmarks and dump their coordinates into a JSON file (ldm.json).
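A rough sketch of that landmark step might look like the following. It uses Dlib's standard 68-point predictor; the JSON schema (image path mapped to a list of `[x, y]` points) and the file names are assumptions here, since the exact format `ldm.json` must follow isn't spelled out in this thread.

```python
# Hypothetical sketch: detect 68-point face landmarks with Dlib and dump
# them to ldm.json. The JSON schema below is an assumption, not the
# repo's documented format.
import json

def landmarks_to_record(image_path, points):
    """Map an image path to a flat list of integer [x, y] coordinates."""
    return {image_path: [[int(p[0]), int(p[1])] for p in points]}

def detect_landmarks(image_path,
                     predictor_path="shape_predictor_68_face_landmarks.dat"):
    import dlib  # imported lazily so the pure helper above works without dlib
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor(predictor_path)
    img = dlib.load_rgb_image(image_path)
    faces = detector(img, 1)
    if not faces:
        return None  # no face found in this frame
    shape = predictor(img, faces[0])
    return [(shape.part(i).x, shape.part(i).y)
            for i in range(shape.num_parts)]

if __name__ == "__main__":
    pts = detect_landmarks("frame_0001.png")  # placeholder frame name
    if pts is not None:
        with open("./test_images/ldm.json", "w") as f:
            json.dump(landmarks_to_record("frame_0001.png", pts), f)
```

The 68-point predictor weights (`shape_predictor_68_face_landmarks.dat`) are distributed separately by Dlib and must be downloaded before running this.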
Hi :) and thank you for your response. I successfully obtained model outputs. I appreciate your prompt assistance!
You're welcome.
Hello there, I'm reaching out for guidance on the implementation phase. Should we begin by training the model on the provided dataset, or would it be advisable to use the pretrained weights and model made available at the provided link? (https://drive.google.com/file/d/1JNMI4RGssgCOl9t05jkUa6imnw5XR5id/view?usp=sharing) Thank you for your time and consideration.