Elwarfalli opened 1 year ago
@Elwarfalli You may directly input the gray images without any change and you would get "gray images" with three channels. Then you may simply convert the three-channel images to one-channel ones.
When I applied that with grayscale input and tested my pre-trained model, the results were saved as three-channel grayscale images. I am curious about PSNR: is it computed on a single grayscale channel, and how? And how can I change the code to save the results as one-channel grayscale?
Thank you for your fast reply,
@Elwarfalli PSNR is computed on the Y channel in the default settings. If you want to compute PSNR for comparing models on gray images, the best way is to retrain a model for gray images with the channel set to 1, for complete fairness. If you just want to see the results of the pretrained model on gray images, the most convenient way is to write a post-processing script that converts the three-channel images to gray, e.g. with OpenCV's BGR2GRAY, and then compute PSNR on the one-channel images.
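The post-processing suggested above can be sketched as follows. This is a minimal NumPy sketch, not code from the HAT repo: `bgr_to_gray` reproduces the BT.601 weighting that `cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)` applies (in practice you would just call OpenCV), `psnr` is the standard formula, and the function names are mine.

```python
import numpy as np

# BT.601 luma weights in B, G, R order, as used by OpenCV's BGR2GRAY
_BGR_WEIGHTS = np.array([0.114, 0.587, 0.299])

def bgr_to_gray(img):
    """Collapse an HxWx3 BGR uint8 image to a single gray channel.

    Equivalent in effect to cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).
    """
    gray = img.astype(np.float64) @ _BGR_WEIGHTS
    return np.clip(np.round(gray), 0, 255).astype(np.uint8)

def psnr(img1, img2, data_range=255.0):
    """PSNR between two uint8 single-channel images of the same shape."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Run `bgr_to_gray` on each saved three-channel result and its ground truth, then compare the two one-channel arrays with `psnr`.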
Thank you,
When I set the parameter as follows:
network_g:
  type: HAT
  upscale: 3
  in_chans: 1
  img_size: 64
  window_size: 16
  compress_ratio: 3
  squeeze_factor: 30
  conv_scale: 0.01
  overlap_ratio: 0.5
  img_range: 1.
  depths: [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
  embed_dim: 180
  num_heads: [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
  mlp_ratio: 2
  upsampler: 'pixelshuffle'
  resi_connection: '1conv'
I got an error:
output = module(*input, **kwargs)
File "C:\Users\Student.conda\envs\hat\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
  return forward_call(*input, **kwargs)
File "e:\hamed\pycode\benchmarks\hat-master\hat\archs\hat_arch.py", line 980, in forward
  x = self.conv_first(x)
File "C:\Users\Student.conda\envs\hat\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
  return forward_call(*input, **kwargs)
File "C:\Users\Student.conda\envs\hat\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
  return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Student.conda\envs\hat\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
  return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [180, 1, 3, 3], expected input[1, 3, 16, 16] to have 1 channels, but got 3 channels instead
Any help, please?
Thank you,
@Elwarfalli You need to modify the data loader for 1-channel image I/O. Create a custom data loader, referring to paired_image_dataset.py.
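The channel handling a custom loader needs can be sketched as below. This is only an illustration of why the RuntimeError above occurs, not code from BasicSR: a real loader (cf. paired_image_dataset.py) would also read from disk (e.g. with `cv2.imread(path, cv2.IMREAD_GRAYSCALE)`), crop patches, and convert to torch tensors; `load_gray_pair` is a hypothetical helper name.

```python
import numpy as np

def load_gray_pair(lq_img, gt_img):
    """Turn LQ/GT arrays into 1xHxW float32 arrays in [0, 1].

    With in_chans=1, conv_first has weight shape [180, 1, 3, 3], so the
    input must carry a single channel; feeding the default 3-channel
    batch triggers "expected input ... to have 1 channels, but got 3".
    """
    def prep(img):
        if img.ndim == 3:            # collapse an accidental 3-channel read
            img = img.mean(axis=2)
        img = img.astype(np.float32) / 255.0
        return img[None, ...]        # add the channel dim -> (1, H, W)
    return prep(lq_img), prep(gt_img)
```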
Based on my understanding, BasicSR's paired_image_dataset.py pairs the LQ and GT images of the dataset. I have my own grayscale dataset for training/validation.
I'd like to ask about your great work.
Is it possible to run it on a grayscale dataset? If so, what should I change? I changed the number of input channels, but it is not working for me.
network structures
network_g:
  type: HAT
  upscale: 3
  in_chans: 1
I am looking forward to hearing back from you. Thank you,