Closed: SeaOfOcean closed this issue 7 years ago.
Another difference:
In `finetune/train.prototxt`:
```
layer {
  name: "conv2_2"
  type: "Eltwise"
  bottom: "conv2_2/3"
  bottom: "conv2_2/input"
  top: "conv2_2"
  eltwise_param {
    operation: SUM
    coeff: 1
    coeff: 1
  }
}
```
In `train/train.prototxt`:
```
layer {
  name: "conv2_2"
  type: "Eltwise"
  bottom: "conv2_2/3"
  bottom: "conv2_1"
  top: "conv2_2"
  eltwise_param {
    operation: SUM
    coeff: 1
    coeff: 1
  }
}
```
`bottom: "conv2_2/input"` vs `bottom: "conv2_1"`
What's the difference? Thanks @sanghoon
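(For readers unfamiliar with the layer: an `Eltwise` layer with `operation: SUM` and both coefficients set to 1 simply adds its two bottom blobs elementwise, i.e. a residual-style shortcut. A minimal NumPy sketch of that behavior, not code from this repo:)

```python
import numpy as np

def eltwise_sum(bottoms, coeffs=None):
    # Mirrors Caffe's Eltwise layer with operation SUM:
    # a weighted elementwise sum of the bottom blobs.
    if coeffs is None:
        coeffs = [1.0] * len(bottoms)
    out = np.zeros_like(bottoms[0], dtype=float)
    for blob, c in zip(bottoms, coeffs):
        out += c * blob
    return out

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[10.0, 20.0], [30.0, 40.0]])
print(eltwise_sum([a, b]))  # with coeff 1, 1 this is just a + b
```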
@sanghoon Dear sanghoon, I have the same confusion about this.
Hi @SeaOfOcean, you can see #5.
@SeaOfOcean, @xiaoxiongli The first difference comes from merging the Conv, BN and shift layers. Thank you @zimenglan-sysu-512.
The second difference is merely a matter of representation. `"conv2_2/input"` is a dummy layer which I added for ease of prototxt editing, and it is exactly the same as `"conv2_1"` (since that is the input to `conv2_2`).
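(For context on the first difference: folding a BatchNorm plus scale/shift pair into the preceding convolution is a standard transformation. A sketch of the usual formula, with illustrative names not taken from this repo; note the folded convolution always carries a bias, which is consistent with `bias_term: true` appearing only in the finetune prototxt:)

```python
import numpy as np

def fold_bn_into_conv(W, b, mean, var, gamma, beta, eps=1e-5):
    # BN(conv(x)) = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
    # Folded:  W' = s * W,  b' = s * (b - mean) + beta,
    # with per-output-channel scale s = gamma / sqrt(var + eps).
    s = gamma / np.sqrt(var + eps)
    W_folded = W * s[:, None, None, None]   # W shape: (out_ch, in_ch, kh, kw)
    b_folded = s * (b - mean) + beta        # folded conv always needs a bias
    return W_folded, b_folded

# Tiny check with a 1x1 convolution on a single input value:
# conv followed by BN must match the folded conv applied directly.
W = np.array([[[[2.0]]], [[[3.0]]]])        # (2, 1, 1, 1)
b = np.array([0.5, -0.5])
mean, var = np.array([0.1, 0.2]), np.array([1.0, 4.0])
gamma, beta = np.array([1.5, 0.5]), np.array([0.0, 1.0])
x = 0.7

Wf, bf = fold_bn_into_conv(W, b, mean, var, gamma, beta)
y_ref = gamma * (W[:, 0, 0, 0] * x + b - mean) / np.sqrt(var + 1e-5) + beta
y_fold = Wf[:, 0, 0, 0] * x + bf
print(np.allclose(y_ref, y_fold))  # True
```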
got it, thanks :)
Hi, I notice that in your finetune prototxt, `bias_term` in the convolution layers is true, while in the train prototxt it is false.
What's the difference, and why did you set it this way?
Thanks