eric612 / MobileNet-YOLO

A caffe implementation of MobileNet-YOLO detection network
865 stars 442 forks

Pruning code for MobileNet-yolov3_lite #167

Open RamatovInomjon opened 5 years ago

RamatovInomjon commented 5 years ago

Hi, thanks for the great work! I have trained a model using the mobilenet_yolov3_lite version. Model performance is good, but the model size is a little big (~26 MB) and inference is slower than I'd like. I wanted to know if there is a way to prune the existing model? I'd appreciate any advice!

Thanks in advance !

Yang507 commented 5 years ago

Hello, I really appreciate your work. I used the MobileNetV2_yoloV3_lite version. I saw your prune prototxt and I wanted to know how to implement the pruning process. I hope to receive your reply. Thanks

eric612 commented 5 years ago

Original author's GitHub: https://github.com/lusenkong/Caffemodel_Compress

The code is here: https://github.com/eric612/MobileNet-YOLO/blob/master/src/caffe/Pruner.cpp

And the entry point is here: https://github.com/eric612/MobileNet-YOLO/blob/master/tools/channel_pruner.cpp

Modify the parameters here (default: m2-yolov3): https://github.com/eric612/MobileNet-YOLO/blob/master/sys_test_config.xml

Usage:

cd caffe_root
./tools/channel_pruner

You may need to retrain models after pruning.
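For intuition, channel pruning of this general kind can be sketched in a few lines of Python. This is an illustrative sketch of the common L1-norm criterion, not the exact logic of Pruner.cpp (whose selection criterion may differ): rank a convolution layer's output channels by the L1 norm of their filters, then drop the weakest fraction.

```python
import numpy as np

def select_channels_to_prune(weights, prune_ratio):
    """Rank output channels of a conv layer by the L1 norm of their
    filters and return the indices of the weakest ones to remove.

    weights: array of shape (out_channels, in_channels, kH, kW)
    prune_ratio: fraction of output channels to drop (0..1)
    """
    out_channels = weights.shape[0]
    # Importance score per output channel: sum of absolute filter weights.
    scores = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    num_to_prune = int(out_channels * prune_ratio)
    # Channels with the smallest L1 norm are pruned first.
    return np.argsort(scores)[:num_to_prune]

# Toy example: 8 output channels, prune 25% -> 2 channels removed.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned = select_channels_to_prune(w, 0.25)
print(len(pruned))  # 2
```

After removing those channels (and the corresponding input channels of the next layer), accuracy usually drops, which is why retraining after pruning is recommended.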

RamatovInomjon commented 5 years ago


Thanks!!!

Yang507 commented 5 years ago

Thanks for your reply! There is one thing I don't understand about the biases after pruning. I used your code to prune the model, but the edited biases don't match what I expected. E.g.:

     train_prototxt:                                          train_prune.prototxt:
        biases: 20                                                   biases: 18
        biases: 37                                                   biases: 34
        biases: 49                                                   biases: 45
        biases: 94                                                   biases: 84
        biases: 73                                                   biases: 65
        biases: 201                                                  biases: 181
        biases: 143                                                  biases: 141
        biases: 265                                                  biases: 111
        biases: 153                                                  biases: 129
        biases: 121                                                  biases: 241
        biases: 280                                                  biases: 254
        biases: 279                                                  biases: 254
eric612 commented 5 years ago

Thanks for the kind reminder. I forgot that I changed the anchors in my local repo. I have already updated the biases, but I need a few days to retrain the models with the modified biases; the currently uploaded weights use the wrong biases.

Yang507 commented 5 years ago

OK! I'm training four classes with your prototxt. I used the prune code to get prune_deploy.prototxt and found that num_output is smaller than group in the same layer, e.g.:

 convolution_param {
   num_output: 346
   bias_term: false
   pad: 1
   kernel_size: 3
   group: 384
   stride: 1
   weight_filler {
     type: "msra"
   }
   dilation: 1
 }

I hope to receive your reply. Thanks

eric612 commented 5 years ago

Yes, because the code can't automatically modify the group number. It did not cause an error with the original author's model, because he used a different layer type, "ConvolutionDepthwise", which does not read the group parameter. Currently, I modify it manually.
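This is why the mismatch matters: Caffe's standard Convolution layer requires both the input channel count and num_output to be divisible by group, so a pruned num_output of 346 with group 384 is invalid until group is shrunk to match. A quick check (hypothetical helper, not from the repo):

```python
def grouped_conv_params_valid(in_channels, num_output, group):
    # Caffe's standard Convolution layer requires both the input channels
    # and num_output to be divisible by group.
    return in_channels % group == 0 and num_output % group == 0

# The pruned layer from the snippet above: num_output 346 with group 384.
print(grouped_conv_params_valid(384, 346, 384))  # False
# After manually shrinking group to match (e.g. group = num_output = 346):
print(grouped_conv_params_valid(346, 346, 346))  # True
```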

Yang507 commented 5 years ago

I added a few lines of code after line 580. I compared my prune_deploy.prototxt with yours, and the group and num_output are the same as yours.

    bool overwrite = false;   // added
    while (getline(fin_in, str)) {
        if (str.find("prob") != string::npos) {
            final_flag = true;
        }
        if (final_flag == true) {
            fin_out << str << '\n';
            continue;
        }
        int index = -1;
        if (str.find(str1) != string::npos) {
            for (auto& r : convNeedRewriteOnPrototxt) {
                string s = '"' + r.first + '"';
                index = str.find(s);
                if (index != -1) {
                    int num = r.second.second;
                    int cut = r.second.first * r.second.second;
                    prunedNum = num - cut;
                    nor_flag = true;
                    break;
                }
            }
        }
        if (str.find(str2) != string::npos) {
            if (!nor_flag) {
                fin_out << str << '\n';
            } else {
                fin_out << "    num_output: " + to_string(prunedNum) << '\n';
                nor_flag = false;
                overwrite = true;   // added
            }
        } else {
            // added: shrink the group number when it exceeds the pruned num_output
            if (str.find("group:") != string::npos && overwrite) {
                string previous = str.substr(0, str.find(":") + 2);
                string group_num = str.substr(str.find(":") + 2);
                int group_num_value = stoi(group_num);
                if (group_num_value > prunedNum) {
                    str = previous + to_string(prunedNum);
                }
                overwrite = false;
            }
            fin_out << str << '\n';
        }
    }
eric612 commented 5 years ago

Thanks for your contribution

Yang507 commented 5 years ago

Hello, sorry to bother you again. I want to change the biases in the prototxt, and I used the code from https://github.com/lars76/kmeans-anchor-boxes to get the boxes, but I'm still confused about your biases. How should I change the biases to train better on my own dataset?

Thanks.

eric612 commented 5 years ago

You can see this issue : https://github.com/eric612/MobileNet-YOLO/issues/92
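For reference, the IoU-based k-means used by the kmeans-anchor-boxes repo linked above can be sketched roughly as follows. This is an illustrative reimplementation, assuming the ground-truth boxes are given as (width, height) pairs:

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between (w, h) boxes and cluster centroids, ignoring position."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) boxes with distance 1 - IoU; returns k anchor sizes."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid it overlaps most with.
        nearest = np.argmax(iou_wh(boxes, clusters), axis=1)
        new = np.array([boxes[nearest == i].mean(axis=0) if np.any(nearest == i)
                        else clusters[i] for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters

# Toy data: small and large boxes should split into two anchors.
boxes = np.array([[10, 12], [11, 11], [50, 60], [55, 58]], dtype=float)
anchors = kmeans_anchors(boxes, k=2)
```

The resulting (width, height) pairs, scaled to the network input resolution, are what go into the `biases:` fields of the YOLO layer.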

shenghsiaowong commented 5 years ago

Hello, which paper is the pruning method based on?

zpge commented 4 years ago

I just pulled the docker image and ran everything inside the container, but I failed to find channel_pruner in tools/. There is only a file named channel_pruner.cpp. Does this mean I need to build the repo inside the docker container?

eric612 commented 4 years ago

@zpge cd into caffe root

cmake .
make -j4
zpge commented 4 years ago

Thanks. Actually, I found channel_pruner in caffe_root/build/tools/.

zpge commented 4 years ago

Can you share the configuration file you used for mobilenetv2-yolov3-lite here? I want to know how to do the pruning. Are there any guidelines?