gednigel opened this issue 5 years ago
Hi @gednigel ,
May I ask you a question? How did you get the CASIA dataset? Their URL seems to be broken. P.S. Maybe this is what you want
Hi @heysun0728 ,
Yep, it seems broken. Well, I can send you both datasets (CASIA v1, v2) if you want.
@heysun0728 I got it on Kaggle, but the images are unlabelled, though they come with the source images, which they call Authentic. Based on some research, I personally think there should be masks for the CASIA data. @BTajini Does your data have masks or other annotations for the altered objects?
@gednigel No, I had to create them myself by pre-processing the image pairs to extract bounding boxes.
Hi, could you please share your processed files?
Oh, I also encountered the same problem; I can't get the CASIA dataset from the link. Would you share the dataset with me? My email is l28150722@gmail.com. Thank you very much.
You can get the CASIA dataset on Kaggle here
@gednigel @BTajini @heysun0728 Hello, can you tell me how to extract bounding boxes from image pairs, or share your processed files?
Hi @MatrixValhalla @xskyz , You can compute the Structural Similarity Index (SSIM) between the real/fake images, then draw a bounding box around the regions where the two images differ, and finally write the location of each bbox to json/txt files. I'm using cv2 and skimage for these two steps, but it's really time-consuming, so I'm trying to create something similar to curve-gcn for NIST16, NIST17 and NIST18 (MFC). Roughly, it looks like the sketch below.
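A minimal sketch, assuming OpenCV 4.x and scikit-image >= 0.16; the file names and the `min_area` cutoff are placeholders, not values from my actual pipeline:

```python
# Minimal sketch: SSIM diff map -> threshold -> contours -> bounding boxes.
# Assumes OpenCV 4.x and scikit-image >= 0.16; paths and min_area are placeholders.
import json
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def extract_bboxes(authentic_path, tampered_path, min_area=40):
    # Both images must have the same dimensions for SSIM.
    real = cv2.cvtColor(cv2.imread(authentic_path), cv2.COLOR_BGR2GRAY)
    fake = cv2.cvtColor(cv2.imread(tampered_path), cv2.COLOR_BGR2GRAY)

    # full=True returns the per-pixel similarity map alongside the global score.
    score, diff = structural_similarity(real, fake, full=True)
    diff = (np.clip(diff, 0, 1) * 255).astype("uint8")

    # Low similarity = likely tampered; Otsu picks the threshold automatically.
    _, thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:  # drop tiny compression-noise blobs
            boxes.append({"x": x, "y": y, "w": w, "h": h})
    return boxes

with open("bboxes.json", "w") as f:
    json.dump(extract_bboxes("authentic.jpg", "tampered.jpg"), f)
```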
Hello @BTajini , thank you for your answer. Can you share the NIST16/17/18 datasets with me? I have tried to find them online but have not succeeded. In addition, can you tell me how to compute the F1 score and the pixel-level AUC mentioned in the paper? Thanks again~
Hi @BTajini , can you also share the NIST16/17/18 datasets with me? Thanks! My email: zhonghongfa09@gmail.com
Hi @xskyz and @xxzcool ,
If you want to download the three-database package (NIST16-17-18), you have to go through a signup procedure: https://www.nist.gov/itl/iad/mig/media-forensics-challenge-2019-0
Otherwise, you don't have the rights to access it.
For the F1 score: F1 = 2 * Precision * Recall / (Precision + Recall). For pixel-level AUC, I did exactly what Peng explained here: https://github.com/pengzhou1108/RGB-N/issues/17#issuecomment-549956171 and of course the average precision (AP) metric. A sketch of all three is below.
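Something like this (a minimal sketch with scikit-learn; `gt_mask` and `pred_mask` are hypothetical arrays holding the binary ground truth and the per-pixel tampering probabilities in [0, 1]):

```python
# Minimal sketch of pixel-level AUC, AP and F1 with scikit-learn.
# gt_mask / pred_mask are hypothetical inputs, not names from the paper's code.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

def pixel_metrics(gt_mask, pred_mask, threshold=0.5):
    y_true = gt_mask.reshape(-1).astype(int)  # binary ground-truth mask
    y_score = pred_mask.reshape(-1)           # per-pixel probabilities

    auc = roc_auc_score(y_true, y_score)           # pixel-level AUC
    ap = average_precision_score(y_true, y_score)  # average precision (AP)
    # f1_score implements 2 * P * R / (P + R) on the binarized prediction.
    f1 = f1_score(y_true, (y_score >= threshold).astype(int))
    return auc, ap, f1
```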
Best,
Hi,
I'm currently working on reproducing the work of your paper. I pretrained the model with the synthetic datasets and I want to fine-tune it with either CASIA or NIST. However, I generated the annotations of the CASIA 2.0 dataset myself by comparing the Authentic and Tampered images and labelling the differences (roughly as in the sketch below). I reviewed most of them, but I still think there should be masks or ground-truth labels for this dataset. Do you have the masks or annotations of the dataset, or related download links? Thank you so much.
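For reference, this is roughly the kind of diff-based labelling I mean (a rough sketch; the threshold of 25, the 5x5 closing kernel and the file names are my own choices, and it assumes each Authentic/Tampered pair is aligned and the same size):

```python
# Rough sketch: binary tampering mask from an aligned authentic/tampered pair.
# The threshold and kernel size are assumptions, not values from the paper.
import cv2
import numpy as np

def make_mask(authentic_path, tampered_path, thresh=25):
    real = cv2.imread(authentic_path, cv2.IMREAD_GRAYSCALE)
    fake = cv2.imread(tampered_path, cv2.IMREAD_GRAYSCALE)

    # Per-pixel absolute difference; anything above `thresh` is marked tampered.
    diff = cv2.absdiff(real, fake)
    mask = (diff > thresh).astype(np.uint8) * 255

    # Close small holes so the tampered region forms a solid blob.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("mask_0001.png", make_mask("authentic_0001.jpg", "tampered_0001.jpg"))
```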