SunnyHaze opened this issue 1 year ago
Hi, thanks for your awesome work. I also found some mismatches between the MVSS-Net paper (ICCV 2021) and the official code. I simply ran inference.py and evaluate.py from the official repo. The only change I made was commenting out the apex part of the code, because it causes a segmentation fault in my environment (I don't know why).
Here are the results I got on the four datasets: Columbia, COVERAGE, NIST16, and CASIAv1. Where a result is lower than the number in the paper, I attach the paper's result like this: {inference result}/{paper result}.
Columbia: pixel-F1: 0.6589; img-level sen: 1.0000, spe: 0.3333/1.000, f1: 0.5000/0.802, auc: 0.9842
CASIAv1: pixel-F1: 0.4512; img-level sen: 0.6163, spe: 0.9687, f1: 0.7533, auc: 0.8385
COVERAGE: pixel-F1: 0.4826; img-level sen: 0.9600, spe: 0.1400, f1: 0.2444, auc: 0.7317
NIST16: pixel-F1: 0.0773/0.292
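For context, the pixel-F1 above is the standard per-pixel F1 between the predicted mask and the ground-truth mask. Below is a minimal sketch of that computation; the 0.5 binarization threshold is an assumption on my part, and the official evaluate.py may binarize or average differently, which alone can shift the numbers:

```python
import numpy as np

def pixel_f1(pred_mask, gt_mask, threshold=0.5):
    """Per-pixel F1 between a predicted probability mask and a binary GT mask.

    Assumes pred_mask holds probabilities in [0, 1] and gt_mask is nonzero
    on tampered pixels. The 0.5 threshold is a common default, not
    necessarily what the official evaluation script uses.
    """
    pred = (np.asarray(pred_mask, dtype=float) > threshold).ravel()
    gt = (np.asarray(gt_mask, dtype=float) > 0).ravel()
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    if 2 * tp + fp + fn == 0:
        # Empty prediction on an authentic image: no pixels to get wrong.
        return 1.0
    return 2 * tp / (2 * tp + fp + fn)
```

Whether authentic (all-zero GT) images count as F1 = 1.0 or are excluded from the average is exactly the kind of convention that can explain part of a paper-vs-reproduction gap.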
I have been questioned by reviewers many times; they keep asking why there is such a big difference between my MVSS results and the official MVSS numbers. One reviewer even gave me a REJECT because of it.
I evaluated a pretrained model file from the official repository using both the official environment and this repository's environment, and got different results. Notably, the metrics from the official repository are higher than those from this repository. The cause of the difference remains unclear.
Below are the evaluation results for CASIAv1+ from both repositories:
Official repository:
Pixel-F1: 0.4512, Image-level accuracy: 0.7901, Sensitivity: 0.6163, Specificity: 0.9900, F1: 0.7597, AUC: 0.0000, Combined F1: 0.5661
This repository:
Pixel-F1: 0.3218, Image-level accuracy: 0.7087, Sensitivity: 0.9052, Specificity: 0.5087, F1: 0.6514, AUC: 0.7742, Combined F1: 0.4308
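The Combined F1 in both rows is consistent with the harmonic mean of the pixel-level F1 and the image-level F1; the sketch below reproduces both quoted values to four decimals (still check evaluate.py for the exact definition used by the script):

```python
def combined_f1(pixel_f1: float, image_f1: float) -> float:
    """Harmonic mean of pixel-level and image-level F1.

    Matches the 'Combined F1' values quoted above; assumed to correspond
    to the Com-F1 metric reported for MVSS-Net.
    """
    if pixel_f1 + image_f1 == 0:
        return 0.0
    return 2 * pixel_f1 * image_f1 / (pixel_f1 + image_f1)

# Official repo row:  combined_f1(0.4512, 0.7597) ≈ 0.5661
# This repo row:      combined_f1(0.3218, 0.6514) ≈ 0.4308
```

Because the harmonic mean is dominated by the smaller term, the drop in pixel-F1 (0.4512 → 0.3218) accounts for most of the Combined-F1 gap between the two environments.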
Hi, when this model was trained, was it trained with the 5,063 tampered images only, or with the 7,491 + 5,063 images?
Since MVSS performs both localization (pixel-level) and detection (image-level), it needs both authentic and tampered images for training. This repository has not been maintained for quite a while; for the latest MVSS reproduction details, please check out our recent work IMDLBenCO: https://github.com/scu-zjz/IMDLBenCo
Thank you for your reply!
Discuss your reproduction results and conclusions in this issue. All researchers focusing on image forensics are welcome!