IIGROUP / MANIQA

[CVPRW 2022] MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
Apache License 2.0
307 stars 36 forks

some question for model architecture #37

Closed WFLiu0327 closed 1 year ago

WFLiu0327 commented 1 year ago

In the MANIQA.py file:

```python
def extract_feature(self, save_output):
    x6 = save_output.outputs[6][:, 1:]
    x7 = save_output.outputs[7][:, 1:]
    x8 = save_output.outputs[8][:, 1:]
    x9 = save_output.outputs[9][:, 1:]
```

You use the outputs of blocks 6, 7, 8, and 9 of the ViT. Why did you choose the outputs of these blocks?

TianheWu commented 1 year ago

At the time, during the competition, we tested the outputs of different stages and found that these stages worked well.

Now, I think these middle stages contain both low-level details and high-level semantic information, which are the two important factors for IQA.
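The pattern in question can be sketched in isolation. Below is a minimal, hypothetical reconstruction (the block count, feature dimension, and the `SaveOutput` hook class are assumptions for illustration, not the repository's actual code): forward hooks record every block's output, and `extract_feature` keeps blocks 6-9, drops the CLS token, and concatenates the four stages along the channel dimension.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the ViT backbone: ten simple "blocks" acting on
# token sequences of shape (B, 1 + N, C), where token 0 is the CLS token.
blocks = nn.ModuleList([nn.Linear(64, 64) for _ in range(10)])


class SaveOutput:
    """Forward hook that records each block's output (illustrative only)."""
    def __init__(self):
        self.outputs = []

    def __call__(self, module, inp, out):
        self.outputs.append(out)


save_output = SaveOutput()
for blk in blocks:
    blk.register_forward_hook(save_output)


def extract_feature(save_output):
    # Take blocks 6-9, drop the CLS token (index 0), stack along channels.
    x6 = save_output.outputs[6][:, 1:]
    x7 = save_output.outputs[7][:, 1:]
    x8 = save_output.outputs[8][:, 1:]
    x9 = save_output.outputs[9][:, 1:]
    return torch.cat((x6, x7, x8, x9), dim=2)


x = torch.randn(2, 1 + 196, 64)  # (batch, CLS + 14x14 patches, dim)
for blk in blocks:
    x = blk(x)

feat = extract_feature(save_output)
print(tuple(feat.shape))  # (2, 196, 256): four 64-dim stages concatenated
```

Concatenating several intermediate stages, rather than using only the final block, is what lets the model mix lower-level detail with higher-level semantics as described above.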

WFLiu0327 commented 1 year ago

I see. Thank you for your reply.