Hi @weishengchong,
I'm afraid that I can't answer the exact difference between those two options right now.
Talking about training time, when I fine-tune a network after eliminating the BN and Scale layers (by merging them, where possible), the iterations become about 25% faster.
IMO, the impact of removing BN layers won't be that significant during test time.
Hi @sanghoon,
Thanks for sharing.
For our googlenet_bn, we are trying to merge BN into the conv weights and bias, then share blob memory between conv layers and merge CBR layers.
Very useful information! Thanks for the discussion. @weishengchong What is a CBR layer?
Hi @quietsmile, for merging CBR (convolution, bias, ReLU) layers, refer to the NVIDIA GIE figures 3-5.
Hi @weishengchong, can you share some information about how to use the optimized model after GIE, especially for detection tasks? Thanks.
@sanghoon Can you share some experience on how to merge BN and Scale layers into the Conv layer?
@sanghoon Dear sanghoon, I have the same confusion about how to merge BN and Scale layers into the Conv layer. I read your Caffe code changes, and I find that your Conv layer code has no modifications compared to the main Caffe branch.
@zimenglan-sisu-512 @xiaoxiongli
It's just simple math. Given the parameters for each layer as:
conv layer: conv_weight, conv_bias
bn layer: bn_mean, bn_variance, num_bn_samples
scale layer: scale_weight, scale_bias
Let us define a vector 'alpha' of scale factors for conv filters:
alpha = scale_weight / sqrt(bn_variance / num_bn_samples + eps)
If we set conv_weight and conv_bias as:
conv_weight = conv_weight * alpha (applied per output filter)
conv_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha)
Then we can get the same result compared to that with the original network by setting bn and scale parameters as:
bn_mean[...] = 0
bn_variance[...] = 1
num_bn_samples = 1
scale_weight[...] = 1
scale_bias[...] = 0
Thus we can simply remove the BN and Scale layers.
The code is not open-sourced, but you can easily implement a script to do this.
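For reference, a minimal NumPy sketch of the merge described above (illustration only, using kyehyeon's notation; not the repository's actual script):

```python
import numpy as np

def merge_conv_bn_scale(conv_weight, conv_bias,
                        bn_mean, bn_variance, num_bn_samples,
                        scale_weight, scale_bias, eps=1e-5):
    # conv_weight: (num_output, channels, kh, kw), conv_bias: (num_output,)
    # bn_mean / bn_variance are the accumulated sums; num_bn_samples is the
    # normalization factor stored by Caffe's BatchNorm layer.
    alpha = scale_weight / np.sqrt(bn_variance / num_bn_samples + eps)
    merged_weight = conv_weight * alpha.reshape(-1, 1, 1, 1)
    merged_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha)
    return merged_weight, merged_bias
```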
@kyehyeon thank you very much! I got it...
Now in the GitHub repo:
- example_train_384\train.prototxt --> the original prototxt, so I can train without modifying the Conv layer.
- example_finetune\train.prototxt --> BN and Scale layers merged into the Conv layer, so I can NOT train unless I modify Caffe's Conv layer according to what you said above.
Right? ^_^
@zimenglan-sisu-512
Do you mean how to use GIE for a detection task?
Hi @sanghoon,
After merging for a 25% speed-up, is there any side effect on training accuracy?
@weishengchong Yes, I have tried, but failed.
Hi @zimenglan-sysu-512,
I haven't tried it. What was your GIE error message?
FYI, at GTC Taipei last month, the page below was introduced to TensorRT (= GIE) users: https://github.com/dusty-nv/jetson-inference
Have you tried reproducing the tutorial results?
@xiaoxiongli You can still fine-tune the network; you just can't batch-normalize the training data. However, in the Faster R-CNN training we haven't used batch normalization at all, so the result will be almost the same.
@weishengchong I haven't compared the two cases. I guess there will be no harmful effect. On the contrary, merging the Scale (bias) layer may improve the resulting accuracy. It's something I'm planning to take a shot at.
@weishengchong Yes, I have followed these instructions, but since I want to use it with py-faster-rcnn, I don't know how to do it.
@weishengchong @quietsmile @zimenglan-sysu-512 @xiaoxiongli Have you guys implemented the code to merge the BN layer into the Conv layer?
@sanghoon Could you release the code for merging BN layers into Conv layers and the scripts to generate the prototxt of networks?
@weishengchong, @sanghoon Here are my comparison results (Python, Windows, GTX 1080).
@hengck23 Dear hengck23, I see your inference result -- 15 ms is really amazing! How did you get this result: using GIE or your own modification?
@xiaoxiongli I use the Caffe code from this website, with CUDA 8 / cuDNN 5.1. I haven't used GIE/TensorRT yet (but I'm working on TensorRT this week).
@hengck23 Which website? Do you mean PVANet's Caffe branch? As far as I know, PVANet's Caffe does not implement the code for merging the BN/Scale layers into the Convolution layer.
@xiaoxiongli There is no code for merging, but both the original and merged models are provided.
@xiaoxiongli As a reference, I also provide the ZF-net speed. I retrained ZF-net from Ross's faster-rcnn using pva-faster-rcnn here.
@hengck23 Dear hengck23, I know that the merged models are provided, but when I carefully read the PVANet Caffe branch code, I find that the Conv layer code has no modifications compared to the main Caffe branch. Do you mean the PVANet Caffe branch code already merges the BN/Scale layers into the Convolution layer? I cannot find where it is...
So if I want to reproduce your 15 ms result, I need to implement the "merge BN/Scale layers into the Convolution layer" code by myself, am I right?
@xiaoxiongli Original prototxt: conv --> bn --> relu --> ...; after merging, test prototxt: conv --> relu --> ...
The conv layer implementation is the same, i.e. the same source code. But the parameter values change, i.e. the caffemodel file changes.
To reproduce 15ms result, just use the new caffemodel file: test.pt & test_690K.model
@hengck23 Dear hengck23, where can I find the new caffemodel files "test.pt & test_690K.model" that you mentioned above? Please help...
And which "train.pt" file did you use to re-train? ^_^
@xiaoxiongli Please refer to: https://github.com/sanghoon/pva-faster-rcnn/blob/master/models/pvanet/download_lite_models.sh https://github.com/sanghoon/pva-faster-rcnn/tree/master/models/pvanet/lite
models/pvanet/lite/test.model (with test.pt)
models/pvanet/lite/original.model (with original.pt)
These are the two models with which I obtain 15 ms and 35 ms respectively.
For the zfnet, you have to modify from the original one.
Thanks @sanghoon for sharing this framework with the community.
I set up PVA-Net on my TX1 and ran the lite version successfully. Out of the box, the net forward pass takes 243 ms. However, when you run TX1 at max performance, the net forward pass takes 184 ms. You get a ~60 ms speed improvement just by running your TX1 at max performance.
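For reference, on the TX1 max-performance mode is typically enabled with the jetson_clocks.sh script bundled with JetPack (run as root, e.g. sudo ./jetson_clocks.sh); the exact path depends on the JetPack version.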
@sanghoon Dear sanghoon, you said that after merging the BN and Scale layers into the Conv layer, the training iterations become 25% faster. How about the inference time?
@hengck23 @sanghoon
using full/test.model: Mean AP = 0.8385, 92 ms on a K40
using full/original.model: Mean AP = 0.8385 (same as above), 110 ms on a K40
@xiaoxiongli https://github.com/e-lab/torch-toolbox/blob/master/BN-absorber/BN-absorber.lua
Batch normalization applies a linear transformation to its input in the evaluation phase. It can be absorbed into the following convolution layer by manipulating that layer's weights and biases.
@xiaoxiongli
https://github.com/terrychenism/NeuralNetTests/blob/master/caffe_utils/gen_bn_inference_v2.py
https://github.com/terrychenism/NeuralNetTests/blob/master/caffe_utils/gen_bn_inference.py
```python
# Absorb the BN parameters into the preceding Convolution layer
weights = caffe.Net(args.model, args.weights, caffe.TEST)
for i, layer in enumerate(model.layer):
    if layer.name not in to_be_absorbed:
        continue
    # BN/Scale statistics stored for this layer
    scale, bias, mean, var = [p.data.ravel() for p in weights.params[layer.name]]
    eps = 1e-5
    invstd = 1. / np.sqrt(var + eps)
    invstd = invstd * scale
    # Walk backwards to find the layer that produces this BN layer's input
    for j in xrange(i - 1, -1, -1):
        bottom_layer = model.layer[j]
        if layer.bottom[0] in bottom_layer.top:
            W, b = weights.params[bottom_layer.name]
            num = W.data.shape[0]
            if bottom_layer.type == 'Convolution':
                # Fold the normalization into the conv weights and bias
                W.data[...] = W.data * invstd.reshape(num, 1, 1, 1)
                b.data[...] = (b.data[...] - mean) * invstd + bias
```
@hengck23, thanks for your kind help. I only find three params under the BatchNorm layer in original.pt, but the code you mentioned needs four params ("scale, bias, mean, var"). How can I solve this problem?
```
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }  # scale
  param { lr_mult: 0 decay_mult: 0 }  # shift/bias
  param { lr_mult: 0 decay_mult: 0 }  # global mean
  batch_norm_param { use_global_stats: true }
}
```
@kyehyeon Dear kyehyeon, in your reply, you said that:
conv_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha)
and I wonder how you derived the formula above. In the Batch Normalization paper, the author gives: y = gamma * (x - mean) / sqrt(var + eps) + beta (where gamma and beta correspond to scale_weight and scale_bias here).
So what I get is: conv_bias = scale_bias - (bn_mean / num_bn_samples) * alpha. How can we get the first term (conv_bias * alpha)?
But in your reply and in hengck23's code above: @sanghoon @hengck23
```python
if bottom_layer.type == 'Convolution':
    W.data[...] = (W.data * invstd.reshape(num, 1, 1, 1))
    b.data[...] = (b.data[...] - mean) * invstd + bias
```
I don't know what's wrong... I feel so confused. Please help, thank you very much ^_^
@kyehyeon @xiaoxiongli Note that what kyehyeon says above is correct. Please modify the Python code based on his comments. If you look at batch_norm_layer.cpp:
```cpp
const Dtype scale_factor = this->blobs_[2]->cpu_data()[0] == 0 ?
    0 : 1 / this->blobs_[2]->cpu_data()[0];
caffe_cpu_scale(variance_.count(), scale_factor,
    this->blobs_[0]->cpu_data(), mean_.mutable_cpu_data());
caffe_cpu_scale(variance_.count(), scale_factor,
    this->blobs_[1]->cpu_data(), variance_.mutable_cpu_data());
```
I deduce:
```
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }  # mean
  param { lr_mult: 0 decay_mult: 0 }  # var
  param { lr_mult: 0 decay_mult: 0 }  # scale
  batch_norm_param { use_global_stats: true }
}
```
@hengck23 Dear hengck23, I agree with you ^_^, but what I care about is how we can deduce the formula: conv_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha), especially the first term.
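For reference, a short sketch of where the conv_bias * alpha term comes from, using the same notation as above: the BN and Scale layers act on the whole conv output, bias included. Writing mu = bn_mean / num_bn_samples and conv_out = conv_weight * x + conv_bias,

scale_weight * (conv_out - mu) / sqrt(bn_variance / num_bn_samples + eps) + scale_bias
= alpha * (conv_weight * x + conv_bias) - alpha * mu + scale_bias
= (conv_weight * alpha) * x + (conv_bias * alpha + scale_bias - (bn_mean / num_bn_samples) * alpha)

so the existing conv_bias also gets scaled by alpha, which is exactly the first term in kyehyeon's formula.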
Hi @hengck23 @xiaoxiongli, the params in the BatchNorm layer contain the following data, respectively: the accumulated mean, the accumulated variance, and the normalization factor.
The average mean is computed as (mean) / (normalization factor). I'm working on writing a short script for merging BN layers.
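For reference, a minimal pycaffe sketch of that lookup (illustration only; it assumes net is a loaded caffe.Net and 'conv1/bn' is the BatchNorm layer name):

```python
# Caffe's BatchNorm blobs: [0] accumulated mean, [1] accumulated variance,
# [2] normalization (moving-average) factor.
mean_sum, var_sum, factor = [p.data.ravel() for p in net.params['conv1/bn']]
norm = 0.0 if factor[0] == 0 else 1.0 / factor[0]
bn_mean = mean_sum * norm      # the actual running mean
bn_variance = var_sum * norm   # the actual running variance
```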
Dear @hengck23 @sanghoon:
In the code below (https://github.com/terrychenism/NeuralNetTests/blob/master/caffe_utils/gen_bn_inference_v2.py):
scale, bias, mean, var = [p.data.ravel() for p in weights.params[layer.name]]
I know this code needs some modifications, and I can get the scale and bias from Caffe's Scale layer.
But from Caffe's BatchNorm layer I can get 3 parameters: mean, var, and the moving_average_fraction. My question is: how should the moving_average_fraction parameter be used when merging the BN/Scale layers into the Conv layer (i.e. absorbing the BN parameters)? Or should I just ignore it?
Hi @hengck23 @xiaoxiongli @swearos
I've committed a simple script to merge 'Conv-BN-Scale' layers into a single Conv layer. Please checkout 39570aab8c6513f0e76e5ab5dba8dfbf63e9c68c.
Please note that it seems to work correctly; however, I haven't tested it thoroughly. I'd appreciate it if you could give your feedback on this.
@sanghoon @hengck23 @swearos Dear sanghoon, it is very kind of you; your script seems to work fine. Thank you!
And I tested the models/pvanet/full model on a K40 GPU:
before your script: 110 ms
after your script: 93 ms without cuDNN, 91 ms with cuDNN (1x1 convolution layers use the Caffe engine)
Thank you! ^_^
@xiaoxiongli have you tested the performance before and after that?
@zimenglan-sysu-512 The mAP is the same; before: 110 ms, after: 93 ms.
@sanghoon Thank you very much!
@xiaoxiongli Hi, I also implemented my own conv+bn+scale merge code. The inference speed really does increase, but not as significantly as yours: about 16% faster than before, 63 ms -> 53 ms. The network looks like Google Inception v2.
@sanghoon @hengck23 @xiaoxiongli @kyehyeon Hi, I applied the conv+bn+scale merge code to my own model. The inference speed really does increase, but the output has a significant shift! Can anyone give me some suggestions? Thanks!
By the way, I also tried to use the parameters extracted via net.params["layer name"] as in 00_classification (in the Caffe examples) to imitate the forward pass of the BatchNorm layer. I extracted the bn_mean, bn_variance, and num_bn_samples (moving average fraction) of the BatchNorm layer and used the following formula to get the output, but the output is different from the one extracted via net.blobs["conv"] (the output after BatchNorm):
(conv_out - bn_mean / num_bn_samples) / sqrt(bn_variance / num_bn_samples)
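For reference: compared with the alpha formula earlier in the thread, the expression above omits the eps term inside the square root. Caffe's BatchNorm computes (conv_out - bn_mean / num_bn_samples) / sqrt(bn_variance / num_bn_samples + eps), with eps = 1e-5 by default, so leaving it out (or using a different eps) can shift the output.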
Hi @sanghoon, I used your script to merge the model, but I find that the output of the merged model does not match the original one.
Hi @maxenceliu, can you share your script? Thanks.
After running ./tools/gen_merged_model.py, it executed correctly, but the resulting model produces detections that make no sense! What went wrong? Before that, the detections were fine.
Hi @sanghoon, I find that if I change np.finfo(np.double).eps to 1e-5, which is the default value of eps in the BN layer, I can get the right results. Thanks.
Can anyone provide some complete example code for TensorFlow on how to merge a conv layer and a BN layer into one conv layer?
Do you have some example code for TensorFlow on how to merge Conv and BN?
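For reference, a minimal NumPy sketch of the same fold for a TensorFlow-style kernel layout (height, width, in_channels, out_channels). The variable names (kernel, bias, gamma, beta, moving_mean, moving_var) are illustrative; extract them however your framework exposes them, and match eps to your BN layer's setting:

```python
import numpy as np

def fold_bn_into_conv_tf(kernel, bias, gamma, beta, moving_mean, moving_var, eps=1e-3):
    # kernel: (H, W, Cin, Cout); all BN parameters are per output channel.
    alpha = gamma / np.sqrt(moving_var + eps)
    folded_kernel = kernel * alpha.reshape(1, 1, 1, -1)   # scale each output channel
    folded_bias = (bias - moving_mean) * alpha + beta
    return folded_kernel, folded_bias
```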
Assuming the PVANet paper's Table 2 reported FPS for conv layers merged with BN and scaling/shifting/ReLU, I'd be glad if the community could share the FPS or speed-up ratio for PVANet with and without the above merging. Thank you very much.