BaiYu0120 closed this issue 8 years ago
Hi, check the siamese network example in Caffe and you will get it: http://caffe.berkeleyvision.org/gathered/examples/siamese.html
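Following that example, weight sharing in Caffe is done by giving the `param` entries of two layers the same name, which ties them to a single underlying blob. A minimal sketch (layer names, blob names, and filter sizes here are placeholders, not from the thread):

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  # Same param names in both layers => one shared weight/bias blob
  param { name: "shared_conv_w" lr_mult: 1 }
  param { name: "shared_conv_b" lr_mult: 2 }
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2"
  param { name: "shared_conv_w" lr_mult: 1 }
  param { name: "shared_conv_b" lr_mult: 2 }
  convolution_param {
    num_output: 256
    kernel_size: 3
    pad: 1
  }
}
```

Both layers must declare identical shapes in `convolution_param`, otherwise the shared blob cannot fit both.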
Thanks, I use the same param name to share weights between layers. DRCN's last layer is for reconstruction; the paper says "These weights are learned during training." I use the Eltwise layer to combine the 16 predictions from all intermediate layers, but it outputs no weights. Are the last layer's weights merged into the filters of the hidden layers? Or can the Eltwise layer be set to output the weights?
There is no trainable parameter in the Eltwise layer; you can add a 1*1 convolution layer after each intermediate result.
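One way this could look in prototxt (a sketch for a single intermediate result; `feat_k`, `recon_k`, and the param names are placeholders). Repeating this pattern for every recursion, each copy reusing the same `param` names, gives one shared trainable reconstruction layer:

```
layer {
  name: "recon_k"
  type: "Convolution"
  bottom: "feat_k"     # the k-th intermediate feature map
  top: "recon_k"
  param { name: "recon_w" }
  param { name: "recon_b" }
  convolution_param {
    num_output: 1      # e.g. one channel for a grayscale output
    kernel_size: 1     # 1*1 convolution, as discussed above
  }
}
```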
I have added the 1*1 reconstruction layer after every intermediate result, but where can I get the average weights of the final layer that adds all intermediate results? I think those weights get merged into the filters of the hidden layers during training. How do you understand it?
The learned 1*1 parameters are the average weights: R = w_1 * r_1 + w_2 * r_2 + w_3 * r_3 + ... + w_16 * r_16. I agree with your idea. In my opinion, a simple sum can obtain a competitive result: R = r_1 + r_2 + r_3 + ... + r_16.
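The two combination rules above can be checked numerically. A toy NumPy sketch (the array shapes and the uniform weight initialization are assumptions for illustration, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.standard_normal((16, 8, 8))   # 16 toy intermediate predictions r_1..r_16

# Learned weighted average: R = w_1*r_1 + ... + w_16*r_16
w = np.full(16, 1.0 / 16)             # hypothetical uniform weights before training
R_weighted = np.tensordot(w, r, axes=1)

# Plain sum: R = r_1 + ... + r_16
R_sum = r.sum(axis=0)

# With uniform weights 1/16, the weighted average is just the sum scaled down
assert np.allclose(R_weighted * 16, R_sum)
```

During training the w_i would drift away from uniform, which is exactly the extra flexibility the learned combination buys over the plain sum.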
OK, thanks.
Hi, do you know how to design DRCN's recurrent conv layer structure with shared weights in Caffe?