SCLBD / DeepfakeBench

A comprehensive benchmark of deepfake detection

Here are some issues about the code and paper. #28

Closed HaoJia-Alchemist closed 10 months ago

HaoJia-Alchemist commented 1 year ago

Thanks for your work, which is meaningful to deepfake detection. I have adapted my model to your benchmark framework following the requirements of base_detector.py. However, I have a few questions about the code and paper:

  1. I used your benchmark to test the performance of EfficientNet-B4, Xception, ResNet-34, and other detectors, and the results I obtained were higher than those reported in Table 2 of your paper. I performed data preprocessing and ran the experiments according to the requirements in the README.md. How should I check what is causing this discrepancy?
  2. Your paper was submitted to NeurIPS 2023. I notice that the author-notification date has passed. Has your paper been accepted?
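
For context, adapting a model to the benchmark means subclassing its abstract detector interface from base_detector.py. A minimal sketch of that pattern is below; the class and method names (`AbstractDetector`, `features`, `classifier`, `forward`) are assumptions for illustration, and the real interface in base_detector.py may define additional hooks (e.g. loss computation):

```python
from abc import ABC, abstractmethod

# Sketch of the kind of interface base_detector.py defines.
# Names here are assumptions for illustration only; consult the
# actual base_detector.py in DeepfakeBench for the real API.
class AbstractDetector(ABC):
    @abstractmethod
    def features(self, data_dict):
        """Extract backbone features from the input batch."""

    @abstractmethod
    def classifier(self, features):
        """Map extracted features to a prediction."""

    def forward(self, data_dict):
        # Default pipeline: backbone features -> classification head.
        return self.classifier(self.features(data_dict))

# Toy detector: "features" is the mean pixel value,
# "classifier" thresholds it at 0.5.
class ToyDetector(AbstractDetector):
    def features(self, data_dict):
        pixels = data_dict["image"]
        return sum(pixels) / len(pixels)

    def classifier(self, features):
        return {"prob_fake": 1.0 if features > 0.5 else 0.0}

detector = ToyDetector()
pred = detector.forward({"image": [0.2, 0.9, 0.8]})
```

A real detector would replace the toy methods with a CNN backbone and classification head, but the subclassing structure is the same.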
YZY-stack commented 11 months ago

Thanks for your attention. There are several things to be mentioned:

  1. It is crucial to acknowledge that different preprocessed datasets can yield different results, which likely explains the discrepancy you observed. This benchmark was established to ensure consistent input data across detectors, enabling a fair, like-for-like evaluation of their performance. This is particularly important because performance comparison in this field currently lacks a standardized criterion, which makes many reported comparisons neither reasonable nor comparable.

  2. Our paper was accepted to the NeurIPS 2023 Datasets and Benchmarks track. Thank you for your kind words.