SunSet0864 opened this issue 7 years ago (status: Open)
Are you sure it costs a lot of time? Haven't tried batch norm.
In SSD and ParseNet, a layer named Normalize is used to scale the responses of the lower layers. The code for the Normalize layer contains many matrix operations such as caffe_cpu_gemm and caffe_cpu_gemv, so it consumes a lot of time during training and testing. I wonder: does this layer give a high return compared with other normalization layers such as batch_norm or LRN? If the Normalize layer were replaced by a batch_norm layer, could we get a higher mAP or faster speed than the original? @weiliu89, thank you for your help!
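For context, the Normalize layer in question (introduced in ParseNet and applied to conv4_3 in SSD) performs a channel-wise L2 normalization at each spatial location, followed by multiplication with a learnable per-channel scale. A minimal NumPy sketch of the forward pass (the (C, H, W) shape and the scale initialization of 20 follow the SSD setup; function and variable names here are illustrative, not from the Caffe code):

```python
import numpy as np

def normalize_forward(x, scale, eps=1e-10):
    """Sketch of the Normalize layer's forward pass.

    x     : feature map of shape (C, H, W)
    scale : learnable per-channel scale of shape (C,)
    """
    # L2 norm across the channel axis at each spatial location
    norm = np.sqrt((x ** 2).sum(axis=0, keepdims=True)) + eps
    # Normalize, then rescale each channel by its learned factor
    return x / norm * scale[:, None, None]

# Example: conv4_3-like feature map, scale initialized to 20 as in SSD
x = np.random.randn(512, 38, 38).astype(np.float32)
scale = np.full(512, 20.0, dtype=np.float32)
y = normalize_forward(x, scale)
```

With a uniform scale of 20, the L2 norm of the output across channels is 20 at every spatial location, which is the intended effect: the low-layer responses are brought to a fixed magnitude before the detection heads see them.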
Have you tried replacing it with a batch_norm layer?