huawei-noah / vega

AutoML tools chain
http://www.noahlab.com.hk/opensource/vega/

CycleSR can not be reimplemented #121

Open greatlog opened 3 years ago

greatlog commented 3 years ago

Hi, I cannot reproduce the reported results of CycleSR on NTIRE2017 Track 2. The reported PSNR is 27.01 dB, while my reimplementation only reaches 22 dB. Could you provide the pretrained weights of all modules in CycleSR?
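For reference, a minimal sketch of how PSNR is commonly computed for SR benchmarks; the max_val and any Y-channel or border-cropping conventions are assumptions, not details taken from the repo or the paper:

    import numpy as np

    def psnr(sr, hr, max_val=255.0):
        # Peak signal-to-noise ratio between two images of the same shape.
        sr = sr.astype(np.float64)
        hr = hr.astype(np.float64)
        mse = np.mean((sr - hr) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(max_val ** 2 / mse)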

zhangjiajin commented 3 years ago

@greatlog

Please adjust the configuration. After the modification, the performance can be greater than 25 dB.

    dataset:
        train:
            batch_size: 64
        test:
            batch_size: 64

    trainer:
        n_epoch: 150

greatlog commented 3 years ago

Thanks for your reply. I still cannot reproduce the reported results of CycleSR myself, and I seem to have found the reason:

div2k_unpair.py is actually not "unpaired". According to this line, the LR and HR images are paired by their file names.

However, the paper says that the training dataset is totally unpaired, so I cannot reproduce the results from the paper.
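A hypothetical sketch of the kind of filename-based pairing being described; the directory arguments and helper name are illustrative, not the repo's actual loader code:

    from pathlib import Path

    def build_pairs(hr_dir, lr_dir):
        # Pair HR and LR images that share the same file name. If every LR
        # file name matches an HR counterpart, the dataset is effectively
        # paired (supervised), even if the loader is labelled "unpair".
        hr_files = {p.name: p for p in Path(hr_dir).glob("*.png")}
        return [(hr_files[p.name], p)
                for p in sorted(Path(lr_dir).glob("*.png"))
                if p.name in hr_files]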

zhangjiajin commented 3 years ago

@greatlog

The data needs to be preprocessed as follows: each HR image is cropped into multiple 480 x 480 sub-images, and each LR image is cropped into multiple 120 x 120 sub-images. Save the sub-images under the same file names in the folders specified by HR_dir and LR_dir.
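A minimal sketch of such a cropping step, assuming x4 scale and non-overlapping crops; the function name and the patch-naming scheme are illustrative, not the repo's actual preprocessing script:

    from pathlib import Path
    from PIL import Image

    def crop_to_tiles(src_dir, dst_dir, tile):
        # Crop every image in src_dir into non-overlapping tile x tile patches.
        # Patches are named <stem>_<row>_<col>.png, so an HR patch and the LR
        # patch cut from the same grid position get the same file name.
        Path(dst_dir).mkdir(parents=True, exist_ok=True)
        for img_path in sorted(Path(src_dir).glob("*.png")):
            img = Image.open(img_path)
            w, h = img.size
            for top in range(0, h - tile + 1, tile):
                for left in range(0, w - tile + 1, tile):
                    patch = img.crop((left, top, left + tile, top + tile))
                    patch.save(Path(dst_dir) / f"{img_path.stem}_{top // tile}_{left // tile}.png")

    # HR tiles are 4x the LR tile size (x4 scale), so grid indices line up.
    # crop_to_tiles("DIV2K_train_HR", "HR_dir", 480)
    # crop_to_tiles("DIV2K_train_LR", "LR_dir", 120)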

greatlog commented 3 years ago

Thanks. I know how to preprocess the datasets, and I have no doubt that the code can produce PSNR results above 25 dB. The issue is that the code is not consistent with your workshop paper: the paper says that the dataset is unpaired and that CycleSR is an unsupervised method, while the code pairs the dataset, which would make CycleSR a supervised method.

SHUHarold commented 3 years ago

Hi greatlog, thanks for your attention to our paper. My name is Shuaijun Chen. CycleSR is an unsupervised method. As for the creation of the dataset, we use the DIV2K training set, which contains 800 paired images. To ensure unsupervised training, we use the first 400 HR images as our HR set and the remaining 400 LR images as our LR set, so the LR and HR images in our framework are unpaired. We have also added the data processing script.
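A hypothetical sketch of the split being described, assuming the 800 DIV2K training images are read in sorted order; the directory names are assumptions, not the repo's actual script:

    from pathlib import Path
    import shutil

    def make_unpaired_split(hr_src, lr_src, hr_dst, lr_dst):
        # Keep HR images 1-400 and LR images 401-800, so no HR/LR pair
        # comes from the same scene and the training set stays unpaired.
        Path(hr_dst).mkdir(parents=True, exist_ok=True)
        Path(lr_dst).mkdir(parents=True, exist_ok=True)
        hr_files = sorted(Path(hr_src).glob("*.png"))
        lr_files = sorted(Path(lr_src).glob("*.png"))
        for f in hr_files[:400]:
            shutil.copy(f, Path(hr_dst) / f.name)
        for f in lr_files[400:]:
            shutil.copy(f, Path(lr_dst) / f.name)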

By the way, one thing is worth noting: before the joint training of CycleGAN and SRNet, the CycleGAN has to be trained for 5 epochs to ensure the generation ability of the generator. You can find this in https://github.com/huawei-noah/vega/blob/master/vega/algorithms/data_augmentation/cyclesr/cyclesr_trainer_callback.py#L153 and https://github.com/huawei-noah/vega/blob/977054e12dd3bc1c96bbe35f18d5db4bc82d0522/zeus/networks/pytorch/cyclesrbodys/cyclesr_net.py#L203
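A minimal sketch of that warm-up schedule, with the per-batch step functions passed in as placeholders; the function names and signatures are hypothetical, not the repo's actual trainer callback:

    def train_cyclesr(cyclegan_step, joint_step, loader, n_epochs=150, warmup_epochs=5):
        # For the first warmup_epochs, only the CycleGAN losses are optimized so the
        # generator learns a usable translation; afterwards CycleGAN and SRNet are
        # updated jointly on every batch.
        for epoch in range(n_epochs):
            joint = epoch >= warmup_epochs
            for batch in loader:
                if joint:
                    joint_step(batch)      # CycleGAN + SRNet losses
                else:
                    cyclegan_step(batch)   # CycleGAN losses only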

greatlog commented 3 years ago

Thanks for your reply and explanation. CycleSR is definitely an excellent method for unpaired SR, and I am working on it because I believe it is promising. However, I still cannot reproduce the reported results on NTIRE2017 or NTIRE2018 with only unpaired datasets; the results trained on paired datasets, by contrast, are close to the reported ones. I will double-check what is wrong.

SHUHarold commented 3 years ago

Thanks for your quick reply and your kind words. If you have any problems, please feel free to leave a message.

greatlog commented 3 years ago

One more question. According to https://github.com/huawei-noah/vega/blob/1bba6100ead802697e691403b951e6652a99ccae/examples/data_augmentation/cyclesr/cyclesr.yml#L19 and https://github.com/huawei-noah/vega/blob/1bba6100ead802697e691403b951e6652a99ccae/examples/data_augmentation/cyclesr/cyclesr.yml#L28, the training batch_size is 64, and imgs_per_gpu is 4. So do you use 16 GPUs to train CycleSR?
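The arithmetic implied by the question, assuming batch_size in the config is the global batch size that gets split evenly across GPUs (this reading of the config keys is an assumption):

    batch_size = 64    # global batch size from cyclesr.yml
    imgs_per_gpu = 4   # per-GPU batch size from cyclesr.yml
    n_gpus = batch_size // imgs_per_gpu
    print(n_gpus)      # 16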