Atmyre opened this issue 10 months ago
Hi, thanks for your interest in our paper.
Sorry for the late response. I am working on my next project. Feel free to ask in this issue if you have more questions.
Hello again,
May I also ask why you used DIV2K_384 images as GT for training, rather than BAID_380/resize_gt?
Hi @Atmyre,
Our method operates in an unsupervised manner: it does not require any paired data, which gives it better generalization ability instead of overfitting to a specific dataset.
Shuffling BAID_380/resize_gt and using it as GT would also yield "unpaired" data. However, since the BAID_380/resize_gt images are manually retouched and may not accurately represent the distribution of real images, we chose the DIV2K dataset instead. You can also use another well-lit image dataset as GT if needed.
If you still choose to use BAID_380/resize_gt as GT, it might improve performance on the BAID test set but could worsen performance on the Backlit300 test set compared to our current checkpoint, as the model may over-adapt to the BAID dataset.
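To make the "unpaired" setup above concrete, here is a minimal sketch of how one might pair backlit inputs with an independently sampled well-lit reference pool. All file names and the helper function are hypothetical, not from the authors' code; the point is only that the reference set (DIV2K here) is drawn without any pixel-level correspondence to the inputs, and swapping in a different folder changes the reference distribution:

```python
import random

def make_unpaired_pairs(backlit_paths, reference_paths, seed=0):
    """Pair each backlit input with a randomly drawn well-lit reference.

    The reference pool (e.g. DIV2K) is sampled independently of the
    inputs, so no pixel-aligned ground truth is ever assumed.
    """
    rng = random.Random(seed)
    return [(inp, rng.choice(reference_paths)) for inp in backlit_paths]

# Hypothetical file lists for illustration; pointing reference_paths at
# BAID_380/resize_gt instead would give an in-domain (but still
# unpaired) reference pool, with the trade-off discussed above.
backlit = ["BAID_380/input/0001.png", "BAID_380/input/0002.png"]
refs = ["DIV2K_384/0801.png", "DIV2K_384/0802.png", "DIV2K_384/0803.png"]
pairs = make_unpaired_pairs(backlit, refs)
```

Because the references are sampled rather than matched, the same input can see a different reference each epoch if the seed varies, which is what prevents the model from memorizing one retouching style.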
To emphasize, one of our motivations is to propose a framework for training models in situations where obtaining real ground truth data is not feasible.
Feel free to discuss with me if you have any other questions.
Thanks for your help!
Hello,
I have the following questions:
I would really appreciate your answers.