sanghoon / pva-faster-rcnn

Demo code for PVANet
https://arxiv.org/abs/1611.08588

Difference between Finetune prototxt and train prototxt #11

Closed SeaOfOcean closed 7 years ago

SeaOfOcean commented 7 years ago

Hi, I notice that in your finetune prototxt, the bias_term in the convolution layers is true, while in the train prototxt it is false.

What's the difference, and why did you set it like this?

Thanks

SeaOfOcean commented 7 years ago

Another difference

In finetune/train.prototxt

layer {
  name: "conv2_2"
  type: "Eltwise"
  bottom: "conv2_2/3"
  bottom: "conv2_2/input"
  top: "conv2_2"
  eltwise_param {
    operation: SUM
    coeff: 1
    coeff: 1
  }
}

In train/train.prototxt

layer {
  name: "conv2_2"
  type: "Eltwise"
  bottom: "conv2_2/3"
  bottom: "conv2_1"
  top: "conv2_2"
  eltwise_param {
    operation: SUM
    coeff: 1
    coeff: 1
  }
}

bottom: "conv2_2/input" vs bottom: "conv2_1"

What's the difference? Thanks @sanghoon

xiaoxiongli commented 7 years ago

@sanghoon Dear sanghoon, I have the same confusion about this.

zimenglan-sysu-512 commented 7 years ago

Hi @SeaOfOcean, you can see #5

sanghoon commented 7 years ago

@SeaOfOcean, @xiaoxiongli The first difference comes from merging the Conv, BN, and shift layers: folding their parameters into the convolution leaves a constant per-channel offset, which is why the merged convolution has bias_term: true. Thank you @zimenglan-sysu-512.
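
For reference, here is how folding a BatchNorm + Scale ("shift") pair into the preceding convolution works: the normalization rescales each output channel of the weights and leaves behind a constant per-channel offset, and that offset is exactly the bias term. A minimal numpy sketch of the standard folding, not taken from this repository (the function name and shapes are illustrative):

import numpy as np

def fold_bn_into_conv(W, gamma, beta, mean, var, eps=1e-5):
    # W: conv weights of shape (out_channels, in_channels, kH, kW).
    # In Caffe, mean/var come from the BatchNorm blobs (after dividing by
    # the stored scale factor) and gamma/beta from the Scale layer.
    std = np.sqrt(var + eps)
    # Each output channel is rescaled by gamma / std ...
    W_folded = W * (gamma / std)[:, None, None, None]
    # ... and shifted by a constant, which is why the merged
    # convolution needs bias_term: true.
    b_folded = beta - gamma * mean / std
    return W_folded, b_folded

# Tiny check: the folded conv matches conv -> BN -> scale on one pixel.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3, 1, 1))
x = rng.normal(size=(3,))
gamma, beta = rng.normal(size=(4,)), rng.normal(size=(4,))
mean, var = rng.normal(size=(4,)), rng.uniform(0.5, 1.5, size=(4,))

y_bn = gamma * ((W[:, :, 0, 0] @ x) - mean) / np.sqrt(var + 1e-5) + beta
W_f, b_f = fold_bn_into_conv(W, gamma, beta, mean, var)
assert np.allclose(y_bn, W_f[:, :, 0, 0] @ x + b_f)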

The second difference is merely a matter of representation. "conv2_2/input" is a dummy layer that I added for the ease of prototxt editing, and it is exactly the same as "conv2_1" (since that is the input to "conv2_2").
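
For context, such a pass-through alias is commonly written in Caffe as a Power layer with default parameters; a minimal sketch of what "conv2_2/input" might look like under that assumption (the actual definition in the repository may differ):

layer {
  name: "conv2_2/input"
  type: "Power"
  bottom: "conv2_1"
  top: "conv2_2/input"
  # Power computes (shift + scale * x) ^ power; the defaults
  # (power = 1, scale = 1, shift = 0) make this an identity,
  # so "conv2_2/input" carries the same blob as "conv2_1".
}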

SeaOfOcean commented 7 years ago

got it, thanks :)