zhangxucan123 opened this issue 2 years ago
Hi, thank you for your comment. Could you check whether proper hyperparameters for CycleGAN were found before applying ISCL? Generative models require extensive experimentation to set suitable hyperparameters because of the learning complexity of each dataset. As with a common limitation of GANs, the discriminators may converge too quickly compared with the generators; this concern is noted as a limitation in the manuscript. I recommend first checking that CycleGAN works well on your dataset, and then trying self-residual learning.
Hello, I saw that your paper also trained on the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset. I am training on this dataset and would like to use your work as a comparison experiment. Could you provide the hyperparameter settings you used for it?
Hi. We used the same hyperparameters for CT and EM. Could you check the preprocessing code for the CT dataset? Also, could you try multiple GPUs to get a larger batch size? We used a batch size as large as would fit on 4 GPUs (RTX 2080 Ti, if I recall correctly). I'm sorry for the missing details of the experimental setting; I'll provide the exact hyperparameters after double-checking.
If you could provide the hyperparameters, it would be really appreciated. The preprocessing of the dataset should be fine; it performs well with other methods. I set the batch size to 64, using a single machine with a single card (RTX 3090).
Hi, I checked that the CT experiments work well in the following environment:
- Iterations per epoch: 400; epochs: 10
- Batch size: 256 across 4 GPUs
- Learning rate: 1e-4 to 1e-7, linear decay
- All other hyperparameters are the same as in the public code.
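For reference, the linear learning-rate decay above can be sketched as a simple schedule. This is only a sketch: I am assuming the decay runs over all 400 × 10 = 4000 optimizer steps, and `linear_decay_lr` is my own helper name, not a function from the repository.

```python
def linear_decay_lr(step, total_steps, lr_start=1e-4, lr_end=1e-7):
    """Linearly interpolate the learning rate from lr_start down to lr_end."""
    t = min(step, total_steps) / float(total_steps)
    return lr_start + (lr_end - lr_start) * t

# Assumption: 400 iterations x 10 epochs = 4000 optimizer steps in total.
TOTAL_STEPS = 400 * 10
```

In a PyTorch training loop this could equivalently be wired up with `torch.optim.lr_scheduler.LambdaLR`; the standalone function just makes the interpolation explicit.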
I have another concern about differences in preprocessing related to the display window. Chest and abdominal images were normalized under different Hounsfield Unit (HU) windows. Also, we normalized the input images to the [-1, 1] range. I recommend first finding proper parameters for CycleGAN in your experimental setting; after that, you can add ISCL training with the same parameters.
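As an illustration of that preprocessing step, HU windowing followed by [-1, 1] normalization might look like the sketch below. The window bounds in the example are placeholders, not the values used in the paper, and `window_and_normalize` is a hypothetical helper, not code from the repository.

```python
import numpy as np

def window_and_normalize(hu, hu_min, hu_max):
    """Clip a CT slice to a display window in Hounsfield Units (HU),
    then linearly rescale the windowed values to the [-1, 1] range."""
    hu = np.clip(hu.astype(np.float32), hu_min, hu_max)
    return 2.0 * (hu - hu_min) / (hu_max - hu_min) - 1.0

# Example with a placeholder abdominal soft-tissue window:
# slice_norm = window_and_normalize(ct_slice, hu_min=-160, hu_max=240)
```

The key point is that chest and abdominal series would each use their own `(hu_min, hu_max)` pair before being fed to the network.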
Could you provide the trained checkpoint for the Mayo dataset? Due to equipment limitations, I cannot use more than one card (RTX 3090), so I cannot reproduce the results of the paper.
Hi, we found that the model overfits easily when training on the Mayo dataset: the PSNR keeps decreasing during validation. What is going on?