FanChiMao / SUNet

SUNet: Swin Transformer with UNet for Image Denoising

Modifying for gray-scale image #7

Closed tsuijenk closed 2 years ago

tsuijenk commented 2 years ago

Hello,

Great work! I was just wondering if there's any way we can modify the model so that it works for grayscale images.

Thank you.

FanChiMao commented 2 years ago

Because our SUNet is trained on RGB images, we don't provide a pre-trained model for grayscale denoising.

However, if you want to retrain a grayscale model, you can simply change out_chans=3 to out_chans=1 in model/SUNet.py.

https://github.com/FanChiMao/SUNet/blob/30eeb03395a9e275a5700914dfec0b71d12eb613/model/SUNet.py#L12

After making this change, if you already have paired grayscale training data, you can run the training code (train.py) directly without adjusting the model architecture.
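As a rough illustration of what this change affects (a toy stand-in, not the repo's actual layer: the 96-channel embed dim is a placeholder), switching out_chans only narrows the final projection:

```python
import torch
import torch.nn as nn

# Toy stand-in for SUNet's final projection layer (not the actual model code).
# Switching out_chans from 3 (RGB) to 1 (grayscale) only changes the number
# of output channels of the last convolution.
out_chans = 1  # was 3 for RGB
head = nn.Conv2d(96, out_chans, kernel_size=3, padding=1)  # 96: placeholder embed dim

y = head(torch.randn(2, 96, 64, 64))
print(tuple(y.shape))  # (2, 1, 64, 64)
```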

Notably, in model/SUNet.py, if the channel dimension (x.size()[1]) is 1 (grayscale images), the input is "repeated" to 3 channels, as shown below.

https://github.com/FanChiMao/SUNet/blob/30eeb03395a9e275a5700914dfec0b71d12eb613/model/SUNet.py#L26-L30

So in_chans does not need to be modified.
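The repeat trick above can be sketched directly in PyTorch (a minimal reproduction of the behaviour the linked lines describe, not the repo's exact code):

```python
import torch

# Minimal sketch of the channel-repeat behaviour: a grayscale batch
# (N, 1, H, W) is replicated to (N, 3, H, W) before entering the network,
# which is why in_chans can stay at 3.
x = torch.randn(4, 1, 128, 128)  # grayscale batch
if x.size()[1] == 1:
    x = x.repeat(1, 3, 1, 1)  # copy the single channel three times
print(tuple(x.shape))  # (4, 3, 128, 128)
```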

Hope this can help you!

nikhilCad commented 2 years ago

How can we train the model on hyperspectral datasets like these: http://lesun.weebly.com/hyperspectral-data-set.html ? They have more than 3 channels.

FanChiMao commented 2 years ago

Hello.

Because our SUNet is built for the image denoising task, where the number of input and output channels is usually 3, we don't expose these parameters in training.yaml.

However, you can directly modify in_chans and the corresponding output channels out_chans for your task here.
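For a hyperspectral cube, this could look like the toy sketch below (31 bands is an assumed example; the stem/head convolutions are placeholders for the real SUNet layers, and 96 is an arbitrary embed dim):

```python
import torch
import torch.nn as nn

# Toy sketch: widening both the input stem and the output head for a
# hyperspectral cube. These convs are placeholders, not SUNet's real layers.
in_chans = out_chans = 31  # assumed number of spectral bands

stem = nn.Conv2d(in_chans, 96, kernel_size=3, padding=1)   # 96: placeholder dim
head = nn.Conv2d(96, out_chans, kernel_size=3, padding=1)

x = torch.randn(1, in_chans, 64, 64)
y = head(stem(x))
print(tuple(y.shape))  # (1, 31, 64, 64)
```

Note that the grayscale-repeat branch in model/SUNet.py only fires when the channel dimension is 1, so a many-band input passes through unchanged once in_chans matches it.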