unintendedly opened this issue 5 months ago (status: Open)
Sorry, at this moment I don't have much time to clean up the training code, but can you describe the significant discrepancies more specifically? I am very curious and can help.
The AP value for the RVFA category is 90.6, which is significantly higher than the 70.6 reported in the paper. However, the AUC for the RVFA category is 74.6, slightly lower than the paper's 80.5. The most puzzling part is that the AUC for the FVRA category is only 74.1, far below the 93.7 reported in the paper. I find this result quite abnormal and difficult to explain. Moreover, the 74.1 AUC for FVRA is averaged over multiple runs, so it should not be a coincidence.
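When AP and AUC diverge this much for the same category, it can help to rule out metric-implementation differences by recomputing both from the raw per-clip scores. Below is a minimal, dependency-free sketch of the two standard definitions (ROC AUC via the Mann-Whitney pair statistic, AP as mean precision at each positive); the function names and the toy inputs are my own, not from this repository.

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive scores higher
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AP: scan clips by descending score and average the precision
    measured at each positive."""
    ranked = sorted(zip(scores, labels), reverse=True)
    hits, precisions = 0, []
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

# Toy example: two positives, two negatives.
labels = [1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6]
auc = roc_auc(labels, scores)            # 0.75
ap = average_precision(labels, scores)   # (1 + 2/3) / 2
```

If the repository's evaluation script and this direct computation agree on the same score files, the discrepancy is more likely in the data split or checkpoint than in the metric code.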
Hello, could you provide the training code? It doesn't need to be cleaned up. sixt@bupt.edu.cn
I didn't do any training; at the time I just tested directly with the author's pretrained model.
Thank you for your outstanding contribution and excellent work. I am very interested in it, but I noticed significant discrepancies in the results when testing on FakeAVCeleb. Could you provide the complete training code? Additionally, regarding the dataset partitioning: how are the forged videos sampled, and what are the proportions of the different categories, such as forged video with forged audio versus forged video with real audio?
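Since the reported per-category numbers are sensitive to how the test split is sampled, one quick sanity check is to tabulate the category proportions of whatever split was actually evaluated. A minimal sketch, assuming a hypothetical list of per-clip category labels (the counts below are illustrative, not the real FakeAVCeleb distribution):

```python
from collections import Counter

def category_proportions(labels):
    """Return each category's share of the split, e.g. to compare
    the balance of FVFA vs FVRA clips against the paper's protocol."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

# Hypothetical split: 60 FVFA, 20 FVRA, 10 RVFA, 10 RVRA clips.
split = ["FVFA"] * 60 + ["FVRA"] * 20 + ["RVFA"] * 10 + ["RVRA"] * 10
props = category_proportions(split)
```

Comparing these proportions between your split and the one used in the paper would show whether the AP/AUC gaps could come from a different category balance rather than from the model itself.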