wottpal opened this issue 5 years ago
Hi wottpal, that's a very interesting topic. Sorry for my late response; I was on a business trip.
At first look, I felt it's difficult, since the image contains lots of trees, which have high-frequency components. Recovering high-frequency data from few samples is a bit hard for a usual network.
I have two ideas.
1) Add a local residual block to the network so that the model can process the residual of the residual image.
2) Add something like an object recognition module so that the model can identify the target and change the recovering weights depending on it (trees/roads/cars...). I mean, my model only has CNNs, but maybe we can add a local FCN so that it can distinguish targets?
So basically, I think it would be better to modify the network structure. Otherwise, you can try making the model deeper or wider, since I feel your data amount is enough.
Best,
Thanks @jiny2001 for your ideas :)
Add a local residual block to the network so that the model can process the residual of the residual image.
I would like to try that, but unfortunately I'm more of a Keras guy and the pure TensorFlow syntax is quite overwhelming for me. Could you give me a hint on what to add where?
Otherwise, you can try making the model deeper or wider, since I feel your data amount is enough.
So maybe 18 layers and 256 filters, but keeping the same min_filters and filters_decay_gamma? 🤷‍♂️
Thank you very much!
Regards from Germany, Dennis
Yeah, sometimes it works when you just use bigger filters and more layers, but sometimes not. I recommend you use the default min_filters and filters_decay_gamma, but you can still adjust them.
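For intuition, here is a rough sketch of how such a decay schedule could work; this is a hypothetical formula with illustrative numbers, not necessarily the exact one the repo uses. The idea is that the per-layer filter count shrinks from filters at the first layer down to min_filters at the last, with filters_decay_gamma shaping the curve:

```python
def filters_for_layer(i, layers, filters, min_filters, gamma):
    """Hypothetical decay schedule: interpolate from `filters` at the first
    layer down to `min_filters` at the last; `gamma` controls the curve."""
    if min_filters <= 0 or layers <= 1:
        return filters
    x = i / (layers - 1)  # 0.0 at the first layer, 1.0 at the last
    return max(min_filters,
               int((filters - min_filters) * (1 - x) ** (1 / gamma) + min_filters))

# illustrative values only: 18 layers, 256 -> 48 filters, gamma=1.5
schedule = [filters_for_layer(i, 18, 256, 48, 1.5) for i in range(18)]
```

With a larger gamma the count stays high for more layers before dropping; with gamma=1 it decays linearly.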
For the local residual block, the code would be something like this. Just add the input tensor back after 2 CNNs:
input = self.H[-1]
self.build_conv("CNN1", input, self.cnn_size, input_feature_num,
                output_feature_num, use_bias=True, activator=self.activator,
                use_batch_norm=self.batch_norm, dropout_rate=self.dropout_rate)
self.build_conv("CNN2", self.H[-1], self.cnn_size, input_feature_num,
                output_feature_num, use_bias=True, activator=self.activator,
                use_batch_norm=self.batch_norm, dropout_rate=self.dropout_rate)
# local residual connection: add the block's input back to its output
self.H[-1] = self.H[-1] + input
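Since you mentioned Keras: the same idea in Keras functional style would look roughly like this. It's just a sketch of the local residual pattern, not code from this repo, and the filter count has to match the block input's channels for the add to work:

```python
from tensorflow.keras import layers

def local_residual_block(x, filters, kernel_size=3):
    """Two conv layers with the block's input added back (local residual).
    Sketch only: `filters` must equal x's channel count for the Add."""
    skip = x
    x = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
    return layers.Add()([x, skip])
```

You would stack a few of these between the feature-extraction and reconstruction parts of the model.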
I'll try that this weekend if I can get some time.
Hey there, I really enjoy your network/implementation but have some questions about improving my results :) As you know this baby best, it would interest me which parameters I should try to change to improve my PSNR. My first run was completely with default parameters (except scale=4), and the performance seems to stagnate after 30 epochs or so (see graph below).
My dataset consists of 800 training and 200 test samples. I've also augmented the training images 4-fold and did the y-convert before training. I'm training the network on satellite imagery, which is much more specific and less diverse than e.g. DIV2K, I guess. Examples: