Closed ciwei123 closed 3 years ago
@ciwei123 Please follow the README; you need to specify --resize.
@lidq92 Thanks for your reply. I get the same result as yours when I add the --resize option. I also find that the way the image is read and preprocessed at test time should be the same as in the training phase.
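As a minimal sketch of keeping test-time preprocessing consistent with training: the 664x498 target size comes from this thread, but the interpolation filter (and whether width or height comes first in your pipeline) is an assumption to check against the actual training script.

```python
from PIL import Image

def preprocess(img, size=(664, 498)):
    # PIL's resize takes (width, height); mismatching the training
    # resolution at test time changes the statistics the model saw.
    # BILINEAR is an assumed filter choice, not confirmed by the thread.
    return img.resize(size, Image.BILINEAR)

# Dummy image at TID2013's native 512x384 resolution
img = Image.new("RGB", (512, 384))
out = preprocess(img)
print(out.size)  # (664, 498)
```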
@lidq92 I trained my model by changing some layers. My models are obtained in the same setting as yours (images resized to 498*664), and I tested the model on the TID2013 dataset. Resizing the test images to 500*500 gives better results than resizing to 498*664. I am confused about this result; could you help me? Thank you very much.

@ciwei123 I would expect 498x664 to be better, since the image resolution in TID2013 is 384x512. Your "unexpected" results may come from your model training process. And if both results are bad, then the distribution discrepancy is the major concern for your question (you need to improve the generalization ability of your model).

@lidq92 Thanks for your very quick reply.
1. You said there is a trade-off between resizing test images to match the training inputs and leaving them unresized to avoid the distortion that resizing introduces. Do you mean that the settings during training and testing may differ, depending on the resolution of the test dataset?
2. My results on TID2013 ("raw" means no resizing):

500*500 / raw / 664*498
0.500 / 0.501 / 0.489
0.502 / 0.502 / 0.493
0.500 / 0.498 / 0.494
0.538 / 0.507 / 0.502
I think 664*498 should be better, but 500*500 performs better, so I am very confused.
3. My training set is only KonIQ-10k (7058 images), resized to 664*498. I think testing the model only on CLIVE and KonIQ-10k is very limited; I want to show the effectiveness of the model on more datasets, so I tested it on TID2013 and LIVE, and I did not fine-tune the model on TID2013, LIVE, or CLIVE. Could you give me any good suggestions for improving generalization?
@ciwei123 As far as I can see, both results are bad, and the distribution discrepancy is a major concern.
What is the result of your trained MobileNet model on the KonIQ-10k test set? If it is not good, you may re-train the model with other optimization hyper-parameters (though I guess the default hyper-parameter values should be OK).
You may refer to mixed dataset training or domain adaptation for better overall performance. Good luck.
@lidq92 The result of my trained MobileNet model on KonIQ-10k test set is SROCC about 0.900, so the hyper-parameters may be OK.
Also, different datasets have different MOS ranges; how should this be handled? Is it unnecessary to consider because the scores are normalized (by norm-in-norm)? And is it okay for a batch to contain images from different datasets? Thank you very much!
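One common way to handle differing MOS ranges before mixing datasets is to linearly map each dataset's scores to a common range such as [0, 1]; a minimal sketch (the per-dataset ranges below are assumptions, use the actual documented range or observed min/max of each dataset):

```python
def normalize_mos(mos, lo, hi):
    # Linearly map a MOS value from [lo, hi] to [0, 1].
    return (mos - lo) / (hi - lo)

# Assumed nominal score ranges, for illustration only:
koniq = [12.3, 88.7, 55.0]   # KonIQ-10k MOS, roughly in [0, 100]
tid   = [1.2, 6.8, 4.0]      # TID2013 MOS, roughly in [0, 9]

koniq_n = [normalize_mos(m, 0.0, 100.0) for m in koniq]
tid_n   = [normalize_mos(m, 0.0, 9.0) for m in tid]
print(koniq_n)  # [0.123, 0.887, 0.55]
```

After this rescaling, scores from different datasets live on a comparable scale, so a mixed batch is at least numerically consistent, though perceptual scale differences between datasets may remain.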
@lidq92 Thanks for your sharing. I tested the model p1q2plus0.1variant.pth you provided; the SROCC I get is 0.797, which is inconsistent with yours (0.834).
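For reference when comparing SROCC numbers, Spearman's rank correlation can be computed by rank-transforming both score lists; a minimal pure-Python sketch (no tie handling; in practice scipy.stats.spearmanr is the usual choice):

```python
def rank(xs):
    # Assign each element its rank in ascending order (no tie handling).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rnk, i in enumerate(order):
        r[i] = rnk
    return r

def srocc(pred, gt):
    # Spearman formula for tie-free data: 1 - 6*sum(d^2) / (n*(n^2-1))
    n = len(pred)
    rp, rg = rank(pred), rank(gt)
    d2 = sum((a - b) ** 2 for a, b in zip(rp, rg))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

pred = [0.2, 0.5, 0.9, 0.4]
gt   = [1.0, 2.5, 4.0, 2.0]
print(srocc(pred, gt))  # 1.0 (perfect monotonic agreement)
```

Small SROCC gaps between runs can also come from preprocessing differences (resizing, color conversion), so it is worth checking those before suspecting the checkpoint itself.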