tyshiwo / DRRN_CVPR17

Code for our CVPR'17 paper "Image Super-Resolution via Deep Recursive Residual Network"

I don't know how you implement the reuse of the parameters. #10

Closed EricKani closed 6 years ago

EricKani commented 6 years ago

Hi~

After reading your model definition file (.prototxt), I still don't understand how the parameter reuse is implemented.

Thanks a lot for your reply.

tyshiwo commented 6 years ago

Hi,

In Caffe, you can name the weight and bias parameters of each conv layer. If you want to share weights among different conv layers, just give the parameters the same names when defining those conv layers.

In Caffe's prototxt, you can find:

```
layer {
  name: "conv1_1a"
  type: "Convolution"
  bottom: "bn_conv1_1a"
  top: "conv1_1a"
  param {
    name: "RB1_wa"
    lr_mult: 1.000000
  }
  param {
    name: "RB1_ba"
    lr_mult: 0.100000
  }
  convolution_param {
    num_output: 128
    kernel_size: 3
    stride: 1
    pad: 1
    weight_filler { type: "msra" }
    bias_filler { type: "constant" value: 0 }
  }
}
```

in which there is a `name` field inside each `param` block; any other layer whose `param` blocks use the same names will share those parameters.
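To make the sharing mechanism explicit, here is a minimal sketch of two conv layers tied together this way. The layer, blob, and parameter names (`convA`, `shared_w`, etc.) are hypothetical, not taken from the DRRN prototxt: because both layers register their weights and bias under the same `param` names, Caffe allocates each blob once, reuses it in both layers, and accumulates gradients from both during backprop.

```
# Hypothetical sketch: two conv layers sharing one set of weights and biases.
layer {
  name: "convA"
  type: "Convolution"
  bottom: "dataA"
  top: "convA"
  param { name: "shared_w" lr_mult: 1 }    # weights registered under "shared_w"
  param { name: "shared_b" lr_mult: 0.1 }  # bias registered under "shared_b"
  convolution_param { num_output: 128 kernel_size: 3 stride: 1 pad: 1 }
}
layer {
  name: "convB"
  type: "Convolution"
  bottom: "convA"
  top: "convB"
  # Same param names as convA => this layer reuses convA's weights and bias.
  param { name: "shared_w" lr_mult: 1 }
  param { name: "shared_b" lr_mult: 0.1 }
  convolution_param { num_output: 128 kernel_size: 3 stride: 1 pad: 1 }
}
```

Note that shared layers must have identical parameter shapes (here: same `num_output` and `kernel_size`), otherwise Caffe raises a shape-mismatch error at net initialization.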