DeniJsonC / WaveNet

[PG 2023] WaveNet: Wave-Aware Image Enhancement

performance #5

Closed: Arusa1 closed this issue 7 months ago

Arusa1 commented 7 months ago

Thanks for your interesting work!

I just copied the network code (WaveNet-S) and trained it on the LOLv1 dataset with my own training strategy, without using the training strategy you provide in this repository. I got a PSNR of about 23 and an SSIM of about 0.855. The SSIM is quite close to the results reported in your paper, while the PSNR is much lower (by about 1.5 dB). Is it the training strategy, or is there something else going on?

Meanwhile, I ran the same code on the LOLv2-real dataset. The result was far below the SOTA results, which is quite different from WaveNet's performance on LOLv1. Since the paper does not report results on LOLv2, I wonder: have you trained and tested on the LOLv2 dataset (real and synthetic)? If so, what were the results?
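One thing worth checking when a PSNR gap of this size appears is how the metrics themselves are computed (RGB vs. Y channel, data range, any ground-truth-based adjustment), since those choices alone can move PSNR noticeably. Below is a minimal sketch of a plain RGB evaluation, assuming float images in [0, 1] and scikit-image >= 0.19; it is not the repository's evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity  # needs scikit-image >= 0.19 for channel_axis

def psnr(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """PSNR = 10 * log10(data_range^2 / MSE) over the whole RGB image."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float(10.0 * np.log10((data_range ** 2) / mse))

def ssim(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """Mean SSIM over an H x W x 3 image pair."""
    return float(structural_similarity(pred, gt, data_range=data_range, channel_axis=-1))

# Toy example with random arrays; a real LOLv1 evaluation would loop over the
# 15 test pairs and average the per-image scores.
pred = np.random.rand(400, 600, 3).astype(np.float32)
gt = np.random.rand(400, 600, 3).astype(np.float32)
print(f"PSNR: {psnr(pred, gt):.2f} dB, SSIM: {ssim(pred, gt):.4f}")
```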

DeniJsonC commented 7 months ago

Hi Arusa1, we are glad you are interested in our work. Several factors can affect the results, such as the OS version, running environment, seed setting, GPU, batch size, training image size, and the training strategy. You could try training our network with a larger input size or a smaller learning rate. Our training strategy is designed for a single RTX 3090 GPU with its full memory. As for LOLv2-real, we did not train our network on this dataset, since it is essentially a "LOLv1 Plus" with a similar data style. However, we did test our model directly on VE-LOL-real, whose test set is the same as LOLv2-real's. Regarding the poor results you got when training on LOLv2-real (you could also try more training epochs), we will look into what caused such a large difference. Thanks again for your feedback!
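For the seed and schedule factors mentioned above, here is a minimal PyTorch sketch of what pinning them down can look like; the hyper-parameter values and the stand-in model are illustrative assumptions, not the repository's released configuration.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Fix every RNG source so weight init and random crops repeat across runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # reproducible, slightly slower
    torch.backends.cudnn.benchmark = False

# Illustrative hyper-parameters only; adjust patch size, learning rate, and
# epochs as suggested above, and treat the repo's own configs as authoritative.
set_seed(42)
model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for a WaveNet-S instance
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300, eta_min=1e-6)
```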