VISION-SJTU / RECCE

[CVPR2022] End-to-End Reconstruction-Classification Learning for Face Forgery Detection
MIT License

pretrained model #2

Closed. GANG370 closed this issue 1 year ago.

GANG370 commented 2 years ago

Very interested in your work! How can I train on my own dataset? Or could you provide some pretrained models on FF++ or WildDeepfake? Thanks a lot!

GANG370 commented 2 years ago

I get `No such file or directory: 'path/to/config.yaml'` when training. Is something missing?

XJay18 commented 2 years ago

`path/to/config.yaml` is a placeholder; set it to a specific file location, e.g., `config/Recce.yml`.
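
For reference, a quick way to confirm the path resolves before launching training. This is a minimal sketch assuming PyYAML is installed; the actual config keys depend on the repo:

```python
# Sanity-check the config path before training (hypothetical helper,
# not part of the repo; assumes PyYAML is available).
import os
import yaml

config_path = "config/Recce.yml"  # replace the 'path/to/config.yaml' placeholder
assert os.path.isfile(config_path), f"Config not found: {config_path}"

with open(config_path, "r") as f:
    config = yaml.safe_load(f)
print(sorted(config))  # list the top-level config keys
```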

XJay18 commented 2 years ago

I retrained the model on FF++ (c23) and you can access the model parameters via this link. (Password: gn4Tzil#)

GANG370 commented 2 years ago

Thanks for your reply. According to the paper, are only real images needed during training? Is my understanding wrong? And if I want to train on my own dataset, what should I do?

XJay18 commented 2 years ago

Hi, the inputs to the network contain both real and fake images. The main idea is to compute the reconstruction loss for real images only, aiming to learn the common representations of real samples. The network requires fake samples to learn discriminative features.
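
To make this split concrete, here is a minimal sketch of the idea (not the repository's exact code; it assumes label 0 denotes real and 1 denotes fake, and that the model outputs both a reconstruction and a classification logit):

```python
import torch
import torch.nn.functional as F

def recce_style_losses(images, labels, recon, logits):
    """Reconstruction loss on real samples only; classification on all.

    images: (N, C, H, W) input batch
    labels: (N,) with 0 = real, 1 = fake (assumed convention)
    recon:  (N, C, H, W) reconstructed images from the model
    logits: (N, 1) classification logits
    """
    real_mask = labels == 0
    if real_mask.any():
        # Only real images contribute to the reconstruction objective.
        recon_loss = F.mse_loss(recon[real_mask], images[real_mask])
    else:
        recon_loss = images.new_zeros(())  # no real samples in this batch
    # Both real and fake images drive the discriminative objective.
    cls_loss = F.binary_cross_entropy_with_logits(
        logits.squeeze(1), labels.float()
    )
    return recon_loss, cls_loss
```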

For training with your own dataset, you should define a custom dataloader that returns RGB images and binary labels. You may refer to the provided dataloaders under the `dataset/` directory and modify the code.
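
A minimal custom dataset might look like the following. This is only a sketch: the file-list format, the 0/1 label convention, and the 299-pixel input size are assumptions, so adapt them to the repo's dataloaders:

```python
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class MyForgeryDataset(Dataset):
    """Yields (RGB image tensor, binary label) pairs.

    `samples` is a list of (image_path, label) tuples with
    label 0 = real and 1 = fake (assumed convention).
    """
    def __init__(self, samples, image_size=299):
        self.samples = samples
        self.transform = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        return self.transform(image), torch.tensor(label, dtype=torch.long)
```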

Blosslzy commented 2 years ago

Hi, the shared model parameter link is invalid. Could you share another one? Thank you very much! :)

XJay18 commented 2 years ago

Hi, the previous sharing link expired. You can access the re-trained FF++ weights via this link. (Password: 7v+MRf8L)

Blosslzy commented 2 years ago

Thank you for your reply! I tested with my own test.py using your provided re-trained weights, but got only about 86% AUC on FF++ (c40). I randomly sample one frame from each video of the test set and then compute the frame-level result. I wonder if the gap is caused by the difference between frame-level and video-level evaluation. Can you give me some advice?

XJay18 commented 2 years ago

Hi, I think sampling only one frame from each video for testing may result in large variations. On average, we use about 50 frames/sequence for testing. Frame-level performance is considered. In addition, please ensure the conservative crop (enlarged by 1.3 around the central face region) is used for cropping facial images. If you still have trouble processing the data, please send me an email or leave your email here. I will share our preprocessed FF++ dataset with you.
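
For anyone reproducing the preprocessing, the conservative crop can be sketched as follows. The 1.3 enlargement factor comes from the comment above; the bounding-box source (any face detector) and the clamping details are assumptions:

```python
import numpy as np

def conservative_crop(image, box, scale=1.3):
    """Enlarge a face box by `scale` around its center, then crop.

    image: (H, W, C) array; box: (x0, y0, x1, y1) from a face detector.
    """
    h, w = image.shape[:2]
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * scale / 2.0
    half_h = (y1 - y0) * scale / 2.0
    # Clamp to the image bounds so the enlarged box stays valid.
    nx0, ny0 = max(int(cx - half_w), 0), max(int(cy - half_h), 0)
    nx1, ny1 = min(int(cx + half_w), w), min(int(cy + half_h), h)
    return image[ny0:ny1, nx0:nx1]
```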

MZMMSEC commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hi, I have the same problem when testing with the provided checkpoints.

When I used your provided pretrained FF++ weights (c40 version) to test on the FF++ test data, I only got an AUC of 86.75 (see attached image). In fact, this result is similar to the performance of my own retrained model.

I didn't change the code except for the dataloader, where I used my own pickle file. I also used the face-crop files you provided in #1 to preprocess the video frames. So I don't know what causes this problem; maybe it comes from the FF++ dataset preprocessing.

Hence, could you share your data-processing script or your preprocessed FF++ dataset for reference? My email is chnzm366aq@163.com. Thank you!

[attached image showing the AUC result]

zhangchaosd commented 1 year ago

> Hi, the previous sharing link expired. You can access the re-trained FF++ weights via this link. (Password: 7v+MRf8L)

Hi, this link seems to be broken. Could you update it? Thanks!

Blosslzy commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Thank you very much!! My email is 21112025@bjtu.edu.cn.

BeauDing commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hello, I have tried using the provided code to test the generalization performance on FF++ (c40), but the results show a sharp performance drop (generally around 0.55). Considering that the only difference is the dataset, I hope you can provide a copy of the FF++ dataset. Thank you very much! My email is beauding@foxmail.com.

SongHaixu commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hi, I'm really interested in your work. Could you share your dataset with me? My email is hxSng@outlook.com.

Simplesss commented 1 year ago

Hi, thanks for your work. Could you share your dataset with me? My email is c_z_chao@163.com.

Ruhangs commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hi, I'm very interested in your work, but I'm still having trouble processing the data and cannot reproduce the results. Would you share your data-processing script or your preprocessed FF++ dataset with me? My email is ruhangs@163.com. Thank you!

rainfalj commented 1 year ago

Hi, thanks for your work. It's also hard for me to reproduce the results. Could you share your dataset with me? My email is ctl-123-me@163.com.

SaharHusseini commented 1 year ago

Hello,

Thank you for your excellent work. Could you also share your preprocessed data with me? My email is husseinisahar1@gmail.com.

WYing333 commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hi, I'm very interested in your work, but I'm still having trouble processing the data and cannot reproduce the results. Would you share your data-processing script or your preprocessed FF++ dataset with me? My email is paulwang333@gmail.com. Thank you!

zhongjian1999 commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Thank you for your excellent work; I would appreciate it if you could provide the preprocessed dataset. My email is zhongjiannjupt@gmail.com.

xuyingzhongguo commented 1 year ago

Hello, thank you for the work. Could you please also send me the preprocessed dataset and pretrained model? My email address is ying.xu@ntnu.no. Thank you!

xarryon commented 1 year ago

Thanks for your great work! It's also hard for me to reproduce the results. Could you share your dataset with me? My email is 872122623@qq.com.

whisper0411 commented 1 year ago

Hello, thank you for your fantastic work. Could you also share your preprocessed data with me? My email is lxin0411@126.com. Thanks a lot!

QuanLNTU commented 1 year ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hi, thank you for your code. Could you share your preprocessed FF++ data with me? My email is lntu_llq@163.com.

VoyageWang commented 9 months ago

> Hi, I think sampling only one frame from each video for testing may result in large variations. […] I will share our preprocessed FF++ dataset with you.

Hi, thank you for your code. Could you share the dataset with me? I'd really appreciate it. My email address is voyagewang@foxmail.com.