ZhexinLiang / CLIP-LIT

[ICCV 2023, Oral] Iterative Prompt Learning for Unsupervised Backlit Image Enhancement
https://zhexinliang.github.io/CLIP_LIT_page/

Questions about retraining with comparative methods #7

Closed: CBY-9527 closed this issue 10 months ago

CBY-9527 commented 11 months ago

Thank you very much for sharing the code. I have a question. The paper states: "To further validation, we also provide retrained supervised methods' results in supplementary material. For unsupervised methods, we retrained them on the same training data as our method to ensure that they are evaluated under the same conditions." However, in Table 1 and Figures 19-27, the supervised methods are not marked as retrained, while the unsupervised methods are. So, were the supervised methods in Table 1 tested directly with the models provided by the corresponding official code, or were they retrained and then tested? If the comparative methods (both supervised and unsupervised) were retrained, could you provide their visual results so that readers can use them directly for comparison in subsequent research? Thank you so much!

ZhexinLiang commented 11 months ago

Hi, thanks for your interest in our work.

All the supervised methods in Table 1 are tested directly with the models provided by their original code; some of the names indicate that a method has several pre-trained versions. For example, SNR-aware-LOLv1 denotes the SNR-aware model pretrained on the LOLv1 dataset. All of these versions come from the corresponding official code.

As stated in our paper, we provide results of some retrained supervised methods in the supplementary material, namely Table 7 and Figures 17-18. (Figure 17 shows the visual comparison on the BAID test dataset rather than the Backlit300 test dataset. Sorry for the typo; I will revise it later.)

ZhexinLiang commented 10 months ago

I have updated the paper on arXiv with a minor revision fixing the typo mentioned above.

I will close this issue now. If you have any other questions, feel free to reopen it or create a new one.