KevinTsaiii opened this issue 4 years ago
Same questions from me as well, hoping to see a reply! ; )
In this experiment, there is no need to fine-tune the model during testing.
The SROCCs reported in Table 2 are the results of models trained for about 50 epochs.
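In code, the evaluation described above amounts to computing the SROCC between the model's predicted scores and the ground-truth MOS on the held-out distortion, with no fine-tuning. A minimal sketch (not the repository's actual code; the example scores are hypothetical):

```python
def srocc(preds, mos):
    """Spearman rank-order correlation coefficient, assuming no tied scores."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r

    rp, rm = ranks(preds), ranks(mos)
    n = len(preds)
    d2 = sum((a - b) ** 2 for a, b in zip(rp, rm))  # squared rank differences
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical predicted quality scores vs. ground-truth MOS:
print(srocc([30.1, 55.2, 70.3, 41.8], [28.0, 60.0, 72.5, 40.0]))  # → 1.0
```

In practice `scipy.stats.spearmanr` does the same thing (and handles ties); the point is only that the reported number is the SROCC at a fixed training budget, not a value cherry-picked across epochs.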
I'm also wondering in the leave-one-distortion-out cross validation, is the model trained on both TID2013 and KADID10K databases, or trained on each database individually?
@SSL92
The model is trained on each database (TID2013 or KADID10K) individually.
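For anyone reimplementing this, a leave-one-distortion-out split within a single database can be sketched as below. The record fields (`img`, `dist_type`, `mos`) are assumptions for illustration, not the authors' actual data format:

```python
def leave_one_distortion_out(samples, held_out_type):
    """Hold out every image of one distortion type for testing;
    train on all remaining distortion types of the same database."""
    train = [s for s in samples if s["dist_type"] != held_out_type]
    test = [s for s in samples if s["dist_type"] == held_out_type]
    return train, test

# Hypothetical records from one database:
samples = [
    {"img": "i01_01.bmp", "dist_type": "gaussian_blur", "mos": 5.2},
    {"img": "i01_02.bmp", "dist_type": "jpeg", "mos": 4.1},
    {"img": "i02_01.bmp", "dist_type": "gaussian_blur", "mos": 3.9},
]
train, test = leave_one_distortion_out(samples, "jpeg")
print(len(train), len(test))  # → 2 1
```

The procedure is repeated once per distortion type, training from scratch each time, and the split never mixes TID2013 and KADID10K samples.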
What is the best `sp` score for model validation?
Hi Hancheng,
Thanks for your great idea!
I'm trying to reproduce your amazing work but found that the code for the leave-one-distortion-out cross validation on TID2013 and KADID10K (Table 2 of the paper) is not included in this GitHub repository. I have several questions about the implementation details. How was the fine-tuning done for this experiment? How were the SROCCs reported in the paper selected? Are they the highest SROCCs among the training epochs, or are they taken at a fixed epoch?
I would appreciate it greatly if you can release the code. Thank you!
Hello, KevinTsaiii I wonder if your problem has been solved? If there is any result, can you share your thinking? thank you very much