SSL92 / hyperIQA

Source code for the CVPR'20 paper "Blindly Assess Image Quality in the Wild Guided by A Self-Adaptive Hyper Network"

about label/score normalization #7

Closed: ainnn closed this issue 3 years ago

ainnn commented 3 years ago

Hey, thanks for your great work. I looked through the code and found that there's no label normalization, e.g., normalizing scores to the range [0, 1]. It's fine not to normalize when training and testing on the same dataset, or on datasets with similar ranges. However, Table 3 in the paper lists three datasets (LIVEC, BID, and KonIQ-10k) that have different score ranges. Is it reasonable to use the raw scores, or did you normalize them? Looking forward to your reply.
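For concreteness, here is a minimal sketch of the kind of min-max normalization being asked about, assuming each dataset's score bounds are known (the function and variable names are hypothetical, not from the repository):

```python
import numpy as np

def normalize_mos(scores, lo=None, hi=None):
    """Min-max normalize raw MOS/quality scores to [0, 1].

    lo/hi default to the observed min/max; for cross-dataset use you
    would instead pass each dataset's nominal score range.
    """
    scores = np.asarray(scores, dtype=np.float64)
    lo = scores.min() if lo is None else lo
    hi = scores.max() if hi is None else hi
    return (scores - lo) / (hi - lo)

# e.g., a dataset whose MOS values nominally span [0, 100]
print(normalize_mos([3.2, 47.5, 91.0], lo=0.0, hi=100.0))  # [0.032 0.475 0.91]
```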

ainnn commented 3 years ago

Also, I found that the target network contains five FC layers in the code, while the paper claims four.

SSL92 commented 3 years ago

Actually, we didn't do this normalization in cross-database evaluation, because the final criterion, SRCC, only measures the rank correlation between two vectors and is therefore independent of their scale. However, normalizing labels to the same scale probably is more reasonable for single-database evaluation: we use a fixed learning rate when training on all databases, whereas it might be better to use different learning rates for datasets with different label ranges.
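This scale-invariance is easy to check: SRCC depends only on the ranks of the two vectors, so any strictly increasing transform of the labels (such as min-max rescaling) leaves it unchanged. A minimal sketch with SciPy:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
pred = rng.uniform(0, 1, size=50)   # model outputs, arbitrary scale
mos = rng.uniform(20, 85, size=50)  # raw labels on some other scale

srcc_raw, _ = spearmanr(pred, mos)
srcc_norm, _ = spearmanr(pred, (mos - mos.min()) / (mos.max() - mos.min()))

# Identical: ranks are unchanged by a strictly increasing rescaling.
assert np.isclose(srcc_raw, srcc_norm)
```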

SSL92 commented 3 years ago

Thanks for the kind reminder; there is indeed a small discrepancy between the number of FC layers in the paper and in our code. However, using four or five FC layers doesn't seem to affect model performance much, probably because the target net has already learned a sufficient quality representation in its earlier layers. You can also change the number of FC layers yourself to see whether the performance changes accordingly.
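As a purely hypothetical illustration of the suggested experiment (not the repository's actual target net, whose FC weights are generated by the hyper network), varying the depth of a plain FC stack might look like this:

```python
import torch
import torch.nn as nn

def make_fc_head(in_dim=224, hidden=112, n_fc=4):
    """Stack n_fc fully connected layers ending in a single
    quality score. Illustrative only; dimensions are made up."""
    layers, d = [], in_dim
    for _ in range(n_fc - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
        d = hidden
    layers.append(nn.Linear(d, 1))  # final FC outputs the score
    return nn.Sequential(*layers)

# Compare 4-FC vs. 5-FC heads on dummy features.
x = torch.randn(8, 224)
print(make_fc_head(n_fc=4)(x).shape, make_fc_head(n_fc=5)(x).shape)
```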

ainnn commented 3 years ago

Thanks for your reply. I still think it's important to normalize labels so that the label distributions are consistent in cross-database evaluation. As for using different learning rates for different databases, it shouldn't matter, because the magnitude of the gradient from the L1 loss is independent of the label scale. This is just a restatement of my ideas, and there's no need to answer. Thanks for sharing such great work.
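The L1 claim can be verified directly: d|e|/de = sign(e), so the gradient magnitude reaching the network is the same whether the labels live in [0, 1] or [0, 100] (unlike L2, whose gradient grows with the error). A small PyTorch check, with illustrative values:

```python
import torch
import torch.nn.functional as F

def l1_grad(pred_val, target_val):
    pred = torch.tensor([pred_val], requires_grad=True)
    F.l1_loss(pred, torch.tensor([target_val])).backward()
    return pred.grad.item()

# Error of 0.2 vs. error of 80: the L1 gradient is sign(error) either way.
print(l1_grad(0.3, 0.5))     # -1.0
print(l1_grad(20.0, 100.0))  # -1.0
```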

SSL92 commented 3 years ago

Our pleasure ; )