proteus1991 / GridDehazeNet

This repo contains the official training and testing code for our paper: GridDehazeNet: Attention-Based Multi-Scale Network for Image Dehazing.
https://jhc.sjtu.edu.cn/~xiaohongliu/

some issue about upsampling and downsampling #8

Closed Ikhwansong closed 4 years ago

Ikhwansong commented 4 years ago


Hi, nice to meet you.

I have a question: why did you construct the up- and down-sampling block out of two parts? To illustrate my understanding, let in_channels = 10, kernel_size = 3, and stride = 2.

Case 1 (yours):

```python
self.conv1 = nn.Conv2d(in_channels, in_channels, kernel_size, stride=stride,
                       padding=(kernel_size - 1) // 2)
self.conv2 = nn.Conv2d(in_channels, stride * in_channels, kernel_size, stride=1,
                       padding=(kernel_size - 1) // 2)
```

The two parts of the down-sampling could instead be fused into a single layer, like this:

Case 2:

```python
self.conv1 = nn.Conv2d(in_channels, stride * in_channels, kernel_size,
                       stride=stride, padding=(kernel_size - 1) // 2)
```

The number of weight parameters in case 1 is 3 x 3 x 10 x 10 + 3 x 3 x 10 x 20, but in case 2 it is only 3 x 3 x 10 x 20. So using the method of case 2 seems more reasonable.
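For concreteness, here is a small self-contained check of those counts in PyTorch (this snippet is mine, not from the repo; unlike the hand counts above, it also includes bias terms):

```python
import torch.nn as nn

in_channels, kernel_size, stride = 10, 3, 2

# Case 1: two-layer down-sampling block (strided conv, then channel expansion)
case1 = nn.Sequential(
    nn.Conv2d(in_channels, in_channels, kernel_size, stride=stride,
              padding=(kernel_size - 1) // 2),
    nn.Conv2d(in_channels, stride * in_channels, kernel_size, stride=1,
              padding=(kernel_size - 1) // 2),
)

# Case 2: single fused layer (strided conv that also expands channels)
case2 = nn.Conv2d(in_channels, stride * in_channels, kernel_size,
                  stride=stride, padding=(kernel_size - 1) // 2)

def count(m):
    return sum(p.numel() for p in m.parameters())

print(count(case1))  # 3*3*10*10 + 10 + 3*3*10*20 + 20 = 2730
print(count(case2))  # 3*3*10*20 + 20 = 1820
```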

Looking forward to your reply. Thank you.

proteus1991 commented 4 years ago

Hi,

Thanks for reading my code. In general, there are many ways to do down/up-sampling, such as traditional interpolation (e.g., bilinear, bicubic) and learned approaches (e.g., pixel-shuffle, transposed convolution). Here, I chose this design empirically: using just one convolutional layer for down/up-sampling might degrade performance. As for the increase in parameters, it is negligible compared to the whole GridDehazeNet architecture.
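For readers comparing the options mentioned above, here is a minimal sketch of the alternative up-sampling techniques (bilinear interpolation, pixel-shuffle, and transposed convolution). This is illustrative only, not the code used in GridDehazeNet; the channel count and kernel sizes are assumptions.

```python
import torch.nn as nn

channels = 10  # assumed channel count, for illustration

# Interpolation-based up-sampling: no learned parameters
up_bilinear = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

# Pixel-shuffle: a conv expands channels by r^2, then rearranges them into space
up_pixelshuffle = nn.Sequential(
    nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),  # (N, C*4, H, W) -> (N, C, 2H, 2W)
)

# Transposed convolution: learned up-sampling in a single layer
up_transposed = nn.ConvTranspose2d(channels, channels, kernel_size=4,
                                   stride=2, padding=1)
```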

Hope my answer helps.

Best,

Xiaohong