Dammi87 opened this issue 7 years ago
I have the same question: does SegNet actually use a mean value or a mean file during training?
Does it make sense to define the DenseImageData layer (the input data layer) in train.prototxt as below, so that it also takes the mean values into account?
Caffe loads this definition successfully and my network trains, but I'm not sure about the resulting output yet.
layer {
  name: "data"
  type: "DenseImageData"
  top: "data"
  top: "label"
  dense_image_data_param {
    source: "data/segnet/train.txt"
    batch_size: 4
    shuffle: true
    crop_width: 560
    crop_height: 425
  }
  transform_param {
    mean_value: 104
    mean_value: 117
    mean_value: 123
  }
}
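For anyone cross-checking what those mean_value entries do, here is a minimal NumPy sketch: Caffe subtracts one mean per channel, and it loads images in BGR order, so the three values correspond to B=104, G=117, R=123 (the common ImageNet means). The subtract_mean helper below is a hypothetical illustration, not part of Caffe:

```python
import numpy as np

# Per-channel means in BGR order, matching the prototxt's
# mean_value: 104 / 117 / 123 entries (assumed ImageNet means).
MEAN_BGR = np.array([104.0, 117.0, 123.0])

def subtract_mean(image_bgr):
    """image_bgr: H x W x 3 float array in BGR channel order."""
    # Broadcasting subtracts the per-channel mean from every pixel.
    return image_bgr - MEAN_BGR

# Example: a 2x2 "image" where every pixel equals the mean
# comes out as all zeros after subtraction.
img = np.tile(MEAN_BGR, (2, 2, 1))
centered = subtract_mean(img)
print(centered.sum())  # -> 0.0
```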
Or do we simply not need a mean value or mean file for SegNet at all?
Thanks.
Hi Alex and other authors!
Thanks for sharing the code and the amazing work, really loving the idea of saving the maxpooling indexes!
I'm currently trying to implement SegNet (and later, Bayesian SegNet) in TensorFlow as an exercise, and I've gotten pretty far. However, I'm a bit confused about the part where you initialize the weights from the VGG16 convolution layers.
Does this mean that you normalize the images the same way VGG16 does? See here.
In Bayesian SegNet you use the same setup, except that dropout is added to the six central layers. Should the weights in Bayesian SegNet also be initialized from VGG16? Doesn't adding dropout change the weights so dramatically that initializing from VGG16 becomes redundant?
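For readers unfamiliar with the dropout part of the question: Bayesian SegNet keeps dropout active at test time and averages several stochastic forward passes (Monte Carlo dropout), using the variance across passes as an uncertainty estimate. Here is a toy sketch of that idea, where forward() is a hypothetical stand-in for the real network, not the actual SegNet code:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, drop_rate=0.5):
    # Toy "network": one fixed linear layer with dropout on its input.
    # Real Bayesian SegNet applies dropout inside the six central
    # encoder/decoder blocks instead.
    W = np.full((4, 3), 0.1)
    mask = rng.random(x.shape) > drop_rate
    h = (x * mask) / (1.0 - drop_rate)  # inverted dropout scaling
    return h @ W

def mc_dropout_predict(x, T=100):
    # Run T stochastic passes; mean = prediction, variance = uncertainty.
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

mean, var = mc_dropout_predict(np.ones(4))
print(mean.shape, var.shape)  # (3,) (3,)
```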
I've also noticed that you set the learning-rate multiplier to 1 for the weights and 2 for the bias:

param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
Is there a particular reason for this? Can you point me to a paper that discusses it?
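For context, a sketch of how Caffe combines these per-blob multipliers with the solver settings: each blob's effective learning rate is base_lr * lr_mult and its effective weight decay is weight_decay * decay_mult, so lr_mult: 2 / decay_mult: 0 means the bias learns at twice the rate of the weights and is excluded from weight decay (a convention inherited from the original Caffe reference models). The base_lr and weight_decay values below are placeholders, not SegNet's actual solver settings:

```python
# Placeholder solver settings (illustrative only).
base_lr = 0.01
weight_decay = 0.0005

# The two param blocks from the prototxt, in order: weights, then bias.
params = {
    "weight": {"lr_mult": 1, "decay_mult": 1},
    "bias":   {"lr_mult": 2, "decay_mult": 0},
}

effective = {}
for name, p in params.items():
    effective[name] = {
        "lr": base_lr * p["lr_mult"],          # per-blob learning rate
        "decay": weight_decay * p["decay_mult"],  # per-blob weight decay
    }
    print(name, effective[name])
```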
Thanks!
Best regards, Adam