sanechips-multimedia / syenet

SYENet: A Simple Yet Effective Network for Multiple Low-Level Vision Tasks with Real-Time Performance on Mobile Device, in ICCV 2023
Apache License 2.0

How did you evaluate on MAI2022 dataset? #8

Closed m0yoshimura closed 5 months ago

m0yoshimura commented 5 months ago

Hi, thank you so much for sharing the great work.

I have a question about MAI2022 evaluation.

  1. I think we can't get the ground truth images of the validation data. How did you get your PSNR value?
  2. How did you evaluate the inference speed? Did you buy the same smartphone?
  3. Could you tell me the constant parameter C used in the score equation?

I want to evaluate in the same way as you did, since your method is nice.

WeiranGou commented 5 months ago

Thanks for the questions.

First, I think you are referring to the test data: all participants of the challenge have access to the validation data, and we did not present the PSNR on the validation data anywhere in the paper. We do not have the inputs or the ground truth of the test data either. The test data is used by the challenge organiser to rank the participating teams; they calculated the PSNR and inference speed and released the results in their report. So the ISP-task PSNR in our paper is taken from that report.

Second, we only evaluated the inference speed of the various models for the LLE and SR tasks ourselves, using the Qualcomm 8 Gen 1 mobile SoC, for comparison. The results of the ISP task, including the inference speed, come from the challenge report.

Third, we did not ask the organiser about the normalizing constant C, but since it is called the normalizing constant, we believe it simply scales the scores linearly. So the value you choose depends on the scores you want to present in your table. We chose C so that it linearly scales the scores in our experimental results table and all the scores in the table sum to 100, which we think is a nice range.
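If it helps for reproducing such a table, here is a minimal Python sketch of the linear scaling described above. The raw-score formula and the example values in the comments are assumptions for illustration only, not the official MAI2022 definition; only the "scale so the table sums to 100" step reflects what is stated above.

```python
# Minimal sketch (not the authors' code) of choosing the normalizing
# constant C so that the scores in a comparison table sum to 100.
# Assumption: each model already has a raw challenge score, e.g. something
# like raw = 2 ** (2 * psnr) / runtime_ms; the exact formula is defined by
# the challenge organisers.

def scale_scores(raw_scores, total=100.0):
    """Linearly scale raw scores so that they sum to `total`."""
    C = sum(raw_scores) / total   # normalizing constant
    return [r / C for r in raw_scores]

# Hypothetical raw scores for three models in a table.
raw = [1.2e4, 3.5e4, 0.8e4]
print(scale_scores(raw))  # the printed values add up to 100
```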

myoshimura1234 commented 5 months ago

Thank you for your kind reply.

I now understand that you are the Multimedia team from the MAI2022 challenge! I had misunderstood, since your paper was presented at ICCV2023, and I was expecting there to be some way to evaluate on the MAI2022 dataset after the end of the challenge.