chuangchuangtan / NPR-DeepfakeDetection


When testing on datasets containing only real images, the performance is poor #9

Open SilverRAN opened 3 weeks ago

SilverRAN commented 3 weeks ago

Thanks for your wonderful work. It's a very simple but effective method when testing on images generated by deep learning models. However, when I test the pretrained model on only real images, the accuracy is only around 30%. It seems the model prefers predicting "fake" over "real". Is this normal?
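For reference, this is roughly how I measured it. A minimal sketch, assuming the detector is a binary classifier whose sigmoid output above 0.5 means "fake"; `load_model` is a placeholder, not the repo's actual loading code, and the preprocessing may differ from yours:

```python
import glob

import torch
from PIL import Image
from torchvision import transforms

# Standard ImageNet-style preprocessing (an assumption; substitute the
# repo's own transform if it differs).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = load_model("NPR.pth")  # hypothetical helper, not the repo's API
model.eval()

correct = total = 0
with torch.no_grad():
    for path in glob.glob("images/*.png"):  # folder of real-only images
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        prob_fake = torch.sigmoid(model(x)).item()
        correct += prob_fake <= 0.5  # ground truth: every image is real
        total += 1

print(f"accuracy on real-only set: {correct / total:.2%}")
```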

chuangchuangtan commented 3 weeks ago

I would appreciate it if you could provide me with the specific test dataset.

SilverRAN commented 2 weeks ago

> I would appreciate it if you could provide me with the specific test dataset.

Sure. The testing data can be downloaded from https://github.com/VL-Group/Natural-Color-Fool/releases/download/data/images.zip

SilverRAN commented 2 weeks ago

> I would appreciate it if you could provide me with the specific test dataset.

I also tested the Huggingface demo and found that its predictions are more accurate. Could you please share the code the demo uses to read and preprocess input? Thanks a lot.

chuangchuangtan commented 1 week ago

Our released model did not see disturbances such as JPEG compression and noise during training, hence it performs poorly on images from online social networks, which are typically re-compressed. Adding such data augmentation during training can improve robustness.
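A minimal sketch of such augmentations, assuming a PyTorch/torchvision training pipeline; the quality and sigma ranges, probabilities, and crop sizes are illustrative, not the settings used for the released model:

```python
import io
import random

import numpy as np
from PIL import Image
from torchvision import transforms

class RandomJPEG:
    """Re-encode the image as JPEG at a random quality to simulate the
    compression applied by online social networks (illustrative ranges)."""
    def __init__(self, quality_range=(30, 95), p=0.5):
        self.quality_range = quality_range
        self.p = p

    def __call__(self, img: Image.Image) -> Image.Image:
        if random.random() > self.p:
            return img
        buf = io.BytesIO()
        img.save(buf, format="JPEG",
                 quality=random.randint(*self.quality_range))
        buf.seek(0)
        return Image.open(buf).convert("RGB")

class RandomGaussianNoise:
    """Add pixel-level Gaussian noise with a random sigma (illustrative)."""
    def __init__(self, sigma_range=(0.0, 3.0), p=0.5):
        self.sigma_range = sigma_range
        self.p = p

    def __call__(self, img: Image.Image) -> Image.Image:
        if random.random() > self.p:
            return img
        arr = np.asarray(img).astype(np.float32)
        sigma = random.uniform(*self.sigma_range)
        arr += np.random.randn(*arr.shape) * sigma
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# Apply the perturbations before the usual resize/crop/normalize steps.
train_transform = transforms.Compose([
    RandomGaussianNoise(),
    RandomJPEG(),
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```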

You can directly access the inference code on Huggingface through the "Files" tab in the upper right corner of the demo.