huangzehao / caffe-vdsr

A Caffe-based implementation of a very deep convolutional network for image super-resolution
MIT License

How to implement recurrent conv layers structure? #11

Closed · BaiYu0120 closed this issue 8 years ago

BaiYu0120 commented 8 years ago

Hi, do you know how to design DRCN's recurrent conv layer structure with shared weights in Caffe?

huangzehao commented 8 years ago

Hi, check the implementation of the siamese network in Caffe and you will get it: http://caffe.berkeleyvision.org/gathered/examples/siamese.html
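
For anyone following along, here is a minimal prototxt sketch of that siamese-style trick (layer and param names here are hypothetical): two convolution layers whose `param { name: ... }` entries match share the same underlying weight blobs.

```
layer {
  name: "conv_rec1"
  type: "Convolution"
  bottom: "data"
  top: "conv_rec1"
  param { name: "rec_w" lr_mult: 1 }  # shared weight blob
  param { name: "rec_b" lr_mult: 2 }  # shared bias blob
  convolution_param {
    num_output: 64
    kernel_size: 3
    pad: 1
    weight_filler { type: "gaussian" std: 0.01 }
  }
}
layer {
  name: "conv_rec2"
  type: "Convolution"
  bottom: "conv_rec1"
  top: "conv_rec2"
  param { name: "rec_w" lr_mult: 1 }  # same name -> same weights as conv_rec1
  param { name: "rec_b" lr_mult: 2 }
  convolution_param {
    num_output: 64
    kernel_size: 3
    pad: 1
  }
}
```

Repeating this pattern for each recursion gives the weight-shared recurrent conv stack.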

BaiYu0120 commented 8 years ago

Thanks, I use the same param name to share weights between layers. DRCN's last layer is for reconstruction; the paper says "These weights are learned during training." I used an Eltwise layer to combine the 16 predictions from all intermediate layers, but no weights are output. Are the last layer's weights merged into the filters of the hidden layers, or can the Eltwise layer be set to output the weights?

huangzehao commented 8 years ago

There are no trainable parameters in the Eltwise layer; you can add a 1*1 convolution layer after your intermediate results.
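
A minimal sketch of that suggestion, with hypothetical layer names (two recursions shown; DRCN uses 16): each intermediate feature map gets its own 1*1 reconstruction conv, and an Eltwise SUM fuses the predictions.

```
layer {
  name: "recon1"
  type: "Convolution"
  bottom: "conv_rec1"
  top: "recon1"
  convolution_param { num_output: 1 kernel_size: 1 }
}
layer {
  name: "recon2"
  type: "Convolution"
  bottom: "conv_rec2"
  top: "recon2"
  convolution_param { num_output: 1 kernel_size: 1 }
}
layer {
  name: "fusion"
  type: "Eltwise"
  bottom: "recon1"
  bottom: "recon2"
  top: "fusion"
  eltwise_param { operation: SUM }  # fixed sum; Eltwise coeffs are not learned
}
```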

BaiYu0120 commented 8 years ago

I have added a 1*1 reconstruction layer after every intermediate result, but where can I get the averaging weights of the final layer that adds all the intermediate results? I think the weights are merged into the filters of the hidden layers during training. How do you understand it?

huangzehao commented 8 years ago

The learned 1*1 parameters are the average weights: R = w_1 * r_1 + w_2 * r_2 + ... + w_16 * r_16. I agree with your idea. In my opinion, a simple sum can obtain a competitive result, thus: R = r_1 + r_2 + ... + r_16.
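
If you do want the w_i learned explicitly rather than absorbed into the hidden filters, one option (a sketch, not taken from this repo) is to concatenate the 16 single-channel predictions and apply a single 1*1 convolution with one output channel and no bias; its 16 filter weights are then exactly w_1 ... w_16.

```
layer {
  name: "concat_recons"
  type: "Concat"
  bottom: "recon1"
  bottom: "recon2"
  # ... recon3 through recon16
  top: "concat_recons"
  concat_param { axis: 1 }  # stack predictions along the channel axis
}
layer {
  name: "weighted_sum"
  type: "Convolution"
  bottom: "concat_recons"
  top: "R"
  convolution_param {
    num_output: 1     # computes R = w_1*r_1 + ... + w_16*r_16 per pixel
    kernel_size: 1
    bias_term: false  # pure weighted combination, no bias term
  }
}
```

After training, the learned weights can be read out of the `weighted_sum` layer's blob.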

BaiYu0120 commented 8 years ago

ok, thanks.