Hi, I used the F1 value as our metric in the paper. I think your val_dice is normal, although it is low. The reason is that your training dataset is not large enough; you should augment your dataset, as Mantra-Net does. In our paper, the original images refer to forgery images without data augmentation.
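(For reference, here is a minimal sketch, not code from this repository and with a hypothetical helper name, of pixel-level F1 and Dice on binary masks. For hard 0/1 masks the two scores coincide, so a low val_dice corresponds directly to a low F1.)

```python
import numpy as np

def f1_and_dice(pred, gt, eps=1e-8):
    """Pixel-level F1 and Dice for binary masks (illustrative helper, not from the repo)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    # Dice = 2*TP / (2*TP + FP + FN), identical to F1 for hard binary masks
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return f1, dice

# example on two random 256x256 masks
rng = np.random.default_rng(0)
print(f1_and_dice(rng.integers(0, 2, (256, 256)), rng.integers(0, 2, (256, 256))))
```

If val_dice is computed on soft sigmoid outputs rather than thresholded masks, the two numbers can drift apart a little, but the trend is the same.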
pokemonjs wrote on Monday, 4 July 2022 at 13:27:
Hello, thanks for your code! I downloaded and ran it, but there are some details not mentioned in the paper; would you mind helping me? I used the initial network settings from the code (such as lr and batch_size), but because of time constraints I changed the number of epochs from 50 to 20. My dataset is CASIA 2.0 (4,982 forgery images), and I used the data augmentation from https://github.com/yelusaleng/RRU-Net/issues/9, which gave me a 5*4982 dataset. Training on it, the best val_dice I got was 0.3238. The paper does not report your metric; is my result too low? The training process is shown below. On some test images the results are also not good, as the second picture shows; is something wrong with my dataset setup or training? In your paper's Table 1, the training set is built from Augmented Splicing and Original Image; how should I understand "Original Image"? Is it the authentic image without splicing/copying/pasting? Thanks! Looking forward to your reply!
[image: Training process for lr=0.001] https://user-images.githubusercontent.com/22408443/177085412-0119aaed-eef9-44ab-b58d-4049d3467a68.png
[image: Left: ground truth, right: inference result] https://user-images.githubusercontent.com/22408443/177086880-11d15b7f-8732-4bd3-9e80-cfc0bade3e7c.png
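(For reference, a minimal sketch of the kind of 5x flip/rotation augmentation mentioned above. The transform set, directory layout, and the augment_pair helper are assumptions for illustration, not the exact script from issue #9.)

```python
from pathlib import Path
from PIL import Image

# Hypothetical 5x augmentation: the original plus four flips/rotations.
# Apply the same transform to each forgery image and its ground-truth mask
# so the pair stays aligned.
TRANSFORMS = {
    "orig": lambda im: im,
    "hflip": lambda im: im.transpose(Image.FLIP_LEFT_RIGHT),
    "vflip": lambda im: im.transpose(Image.FLIP_TOP_BOTTOM),
    "rot90": lambda im: im.transpose(Image.ROTATE_90),
    "rot270": lambda im: im.transpose(Image.ROTATE_270),
}

def augment_pair(img_path, mask_path, out_dir):
    out_dir = Path(out_dir)
    (out_dir / "images").mkdir(parents=True, exist_ok=True)
    (out_dir / "masks").mkdir(parents=True, exist_ok=True)
    img, mask = Image.open(img_path), Image.open(mask_path)
    for name, t in TRANSFORMS.items():
        t(img).save(out_dir / "images" / f"{Path(img_path).stem}_{name}.png")
        t(mask).save(out_dir / "masks" / f"{Path(mask_path).stem}_{name}.png")
```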
Thank you for your reply and guidance. Yesterday I found your pre-trained model 'best_model' in the code; it performs almost perfectly on the test pictures, and its F1 score is 0.51 on my test set, higher than the 0.37 from my own training. So may I ask how you trained the model so well? You said that I should augment the dataset. In the paper, the best RRU-Net F1 is almost 0.85; would you mind telling me the dataset size used during training? (I have augmented my data to 20,000 images, and with your crop augmentation it is 40,000.) And are all of them produced by augmentations like flip/crop of CASIA? Thank you! Looking forward to your reply.
The 'best_model' was trained on my own training dataset; I can't release that dataset since the corresponding paper is under review. Simple data augmentation can't make RRU-Net reach the performance of 'best_model', even if your augmented dataset has 40,000 images. You need a dataset with significant capacity, like COCO, to generate forgery data with sufficient diversity.
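(For illustration only, a rough sketch of the kind of COCO-based splice generation described here: cut an annotated object out of a donor image, paste it onto a host image, and save the pasted region as the ground-truth mask. The file paths and the make_splice helper are assumptions, not the author's actual generation pipeline.)

```python
import numpy as np
from PIL import Image
from pycocotools.coco import COCO

# Placeholder paths; point these at a local COCO download.
coco = COCO("annotations/instances_train2017.json")
img_ids = coco.getImgIds()

def make_splice(host_id, donor_id, out_img="forged.png", out_mask="mask.png"):
    """Paste the first annotated object of the donor image onto the host image."""
    host_info = coco.loadImgs([host_id])[0]
    donor_info = coco.loadImgs([donor_id])[0]
    host = np.array(Image.open("train2017/" + host_info["file_name"]).convert("RGB"))
    donor = np.array(Image.open("train2017/" + donor_info["file_name"]).convert("RGB"))

    anns = coco.loadAnns(coco.getAnnIds(imgIds=donor_id, iscrowd=False))
    if not anns:
        return False
    obj_mask = coco.annToMask(anns[0]).astype(bool)  # binary mask of the donor object

    # crop both images and the mask to a common size so the paste is well-defined
    h = min(host.shape[0], donor.shape[0])
    w = min(host.shape[1], donor.shape[1])
    host, donor, obj_mask = host[:h, :w], donor[:h, :w], obj_mask[:h, :w]

    forged = host.copy()
    forged[obj_mask] = donor[obj_mask]  # splice the donor pixels into the host
    Image.fromarray(forged).save(out_img)
    Image.fromarray((obj_mask * 255).astype(np.uint8)).save(out_mask)
    return True

# example usage: splice an object from the second image into the first
make_splice(img_ids[0], img_ids[1])
```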
Thank you very much! Thank you for your enthusiastic help; it really helps a lot. Though the paper was done in 2019, you still maintain the community to help others. Respect!
It is my pleasure to help everyone.