Closed chaofengc closed 2 years ago
Hi!
Thanks for your question!
We also discovered this problem during our initial experiments. One reference image corresponds to multiple distorted images, so when we preprocess the training dataset (the label .txt file), we sort it object-wise. It is just as you say: the datasets are split according to different reference images. We did not upload this preprocessing code to GitHub.
Thanks for the prompt explanation. Sorry, I still don't get the idea. Since a random shuffle is performed for the partition, how does the sorting work?
The sorting is part of preprocessing the label file (this applies only to our label file). object_data contains only the reference image names. Then, when you load your distorted images, you can select them according to train_name and val_name.
Got it. Thanks for patient reply.
Thanks for your great work. I have some questions about the dataset partition strategy on LIVE, CSIQ, TID2013, and KADID-10k, because it may cause a great difference in performance.
As we know, the distorted images in these datasets are synthesized from a small number of reference images; in other words, the same reference image may have many corresponding distorted versions. To avoid content bias between training and validation, previous works, such as TReS, usually split the datasets according to different reference images.
However, in the script utils/process.py, it seems that the experiments in the paper simply split the distorted images in an 8:2 ratio without considering the reference images. I have run some simple experiments on KADID-10k with these two partition strategies and found that random splitting without considering reference images leads to a great improvement in PLCC/SRCC. Therefore, performance with a simple random partition may not be reliable because of content overlap between training and validation images.