ShaoqingRen / SPP_net

SPP_net : Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition

box regression error #27

Closed · jackiechensuper closed this 9 years ago

jackiechensuper commented 9 years ago

Dear Shaoqing, I ran SPP_net on Linux, and most of Script_spp_voc runs smoothly. However, I get the following error during box regression when I set the feature layer (opts.layer) to 6 or 7:

feature stats: 1/200
Cell contents reference from a non-cell array object.

Error in spp_poolX_to_fcX (line 39)
  feat_gpu = max(0, bsxfun(@plus, spp_model.cnn.layers(i).weights_gpu{1} * feat_gpu, ...

Error in spp_feature_stats (line 58)
  X = spp_poolX_to_fcX(X, layer, spp_model, conf.use_gpu);

Error in spp_train_bbox_regressor (line 60)
  opts.feat_norm_mean = spp_feature_stats(imdb, roidb, opts.layer, spp_model);

Error in spp_exp_bbox_reg_train_and_test_voc (line 37)
  bbox_reg = spp_train_bbox_regressor(opts.imdb_train, opts.roidb_train, ld.spp_model, ...

Error in Script_spp_voc (line 83)
  spp_exp_bbox_reg_train_and_test_voc(opts);

Thanks for your help.
Regards, Jackie
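For context, "Cell contents reference from a non-cell array object." is the error MATLAB (of that era) raises when {1}-style brace indexing is applied to a variable that is not a cell array. A minimal, purely illustrative reproduction (not repo code):

    w = [];    % e.g. a weights_gpu field that was never populated
    y = w{1};  % Error: Cell contents reference from a non-cell array object.

So the failing line in spp_poolX_to_fcX suggests spp_model.cnn.layers(i).weights_gpu is not a cell array at that point.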

wuqiangch commented 9 years ago

@jackiechensuper I also have problems running Script_spp_voc.m on Ubuntu. I compiled SPP_net from https://github.com/ShaoqingRen/SPP_net and https://github.com/ShaoqingRen/caffe. spp_demo runs fine with these parameters:

caffe_net_file = fullfile(pwd, 'data/Zeiler_conv5_new/Zeiler_conv5');
caffe_net_def_file = fullfile(pwd, 'data/Zeiler_conv5_new/Zeiler_spm_scale224_test_conv5.prototxt');

But when I run Script_spp_voc.m, something goes wrong, even though I changed nothing except the net parameters:

opts.net_file = fullfile(pwd, 'data/Zeiler_conv5_new/Zeiler_conv5');
opts.net_def_file = fullfile(pwd, 'data/Zeiler_conv5_new/Zeiler_spm_scale224_test_conv5.prototxt');
opts.spp_params_def = fullfile(pwd, 'data/Zeiler_conv5_new/spp_config');
opts.finetune_net_def_file = fullfile(pwd, 'model-defs/pascal_finetune_fc_spm_solver_new.prototxt');

Everything is fine until rst = caffe('train', {data_train{1}, label_train{1}}): MATLAB closes, with no error message. To narrow it down, I put rst = caffe('test', {data_test{1}, label_test{1}}) before the train call. The caffe test call runs, but the train call still crashes. Thanks!
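That is, the narrowing-down test described above amounts to the following sequence (variable names as quoted from the finetuning stage; an illustrative sketch, not repo code):

    rst = caffe('test',  {data_test{1},  label_test{1}});   % completes and returns
    rst = caffe('train', {data_train{1}, label_train{1}});  % MATLAB exits here, nothing printed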

ShaoqingRen commented 9 years ago

@jackiechensuper

I think you can check whether spp_model.cnn.layers(i).weights_gpu exists; if it does not, you can use spp_model.cnn.layers = spp_layers_in_gpu(spp_model.cnn.layers); to generate it.
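A minimal sketch of that check (the isfield/isempty guard is my assumption; spp_layers_in_gpu is the repo function named above):

    % Ensure the GPU weight copies exist before spp_poolX_to_fcX indexes weights_gpu{1}.
    if ~isfield(spp_model.cnn.layers, 'weights_gpu') || isempty(spp_model.cnn.layers(1).weights_gpu)
        spp_model.cnn.layers = spp_layers_in_gpu(spp_model.cnn.layers);
    end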

ShaoqingRen commented 9 years ago

@wuqiangch

During finetuning, a log file for caffe_mex is written to caffe_log_file = fullfile(work_dir, opts.finetune_cache_name); you can check this file to find the problem.
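A minimal sketch for inspecting that log from inside MATLAB (assuming work_dir and opts.finetune_cache_name as above, and that the path points at the text log itself; type is a standard MATLAB function):

    caffe_log_file = fullfile(work_dir, opts.finetune_cache_name);
    type(caffe_log_file);  % print the caffe_mex log to the Command Window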

wuqiangch commented 9 years ago

@ShaoqingRen, I can't find any problems in the caffe_log_file. Here it is:

Log file created at: 2014/12/30 09:16:48
Running on machine: wuqiang-Alienware-X51-R2
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1230 09:16:48.314431 21329 matcaffe.cpp:464] Loading from /home/wuqiang/wq/SPP_net/model-defs/pascal_finetune_fc_spm_solver_new.prototxt
I1230 09:16:48.314664 21329 matcaffe.cpp:468] Starting Optimization
I1230 09:16:48.314718 21329 solver.cpp:36] Initializing solver from parameters:
  test_iter: 60  test_interval: 400  display: 20  max_iter: 200000
  lr_policy: "vstep"  momentum: 0.9  weight_decay: 0.0005
  snapshot: 0  snapshot_prefix: "pascal_finetune_fc_spm"  solver_mode: GPU
  net: "pascal_finetune_fc_spm_train_test_new.prototxt"
  vstep_lr: 0.001  vstep_lr: 0.0001  vstep_lr: 1e-05
  vstep_size: 30000  vstep_size: 30000  vstep_size: 30000
I1230 09:16:48.314729 21329 solver.cpp:71] Creating training net from net file: pascal_finetune_fc_spm_train_test_new.prototxt
I1230 09:16:48.315081 21329 net.cpp:40] Initializing net from parameters:
  layers { bottom: "data" top: "fc6" name: "fc6" type: INNER_PRODUCT blobs_lr: 1 blobs_lr: 2 inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } }
  layers { bottom: "fc6" top: "fc6" name: "relu6" type: RELU }
  layers { bottom: "fc6" top: "fc6" name: "drop6" type: DROPOUT dropout_param { dropout_ratio: 0.5 } }
  layers { bottom: "fc6" top: "fc7" name: "fc7" type: INNER_PRODUCT blobs_lr: 1 blobs_lr: 2 inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } }
  layers { bottom: "fc7" top: "fc7" name: "relu7" type: RELU }
  layers { bottom: "fc7" top: "fc7" name: "drop7" type: DROPOUT dropout_param { dropout_ratio: 0.5 } }
  layers { bottom: "fc7" top: "fc8_pascal" name: "fc8_pascal" type: INNER_PRODUCT blobs_lr: 1 blobs_lr: 2 weight_decay: 1 weight_decay: 0 inner_product_param { num_output: 21 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } }
  layers { bottom: "fc8_pascal" bottom: "label" top: "loss" name: "loss" type: SOFTMAX_LOSS }
  layers { bottom: "fc8_pascal" bottom: "label" top: "accuracy" name: "accuracy" type: ACCURACY }
  input: "data"  input_dim: 128 input_dim: 12800 input_dim: 1 input_dim: 1
  input: "label" input_dim: 128 input_dim: 1 input_dim: 1 input_dim: 1
  state { phase: TRAIN }
I1230 09:16:48.323684 21329 net.cpp:359] Input 0 -> data
I1230 09:16:48.323706 21329 net.cpp:359] Input 1 -> label
[layer-by-layer setup elided: label_input_1_split, fc6, relu6, drop6, fc7, relu7,
 drop7, fc8_pascal, fc8_pascal_fc8_pascal_0_split, loss, and accuracy are each
 created and set up with the expected top shapes (label splits: 128 1 1 1 (128);
 fc6/fc7: 128 4096 1 1 (524288); fc8_pascal and its splits: 128 21 1 1 (2688);
 loss and accuracy: 1 1 1 1 (1), loss "with loss weight 1"); loss, fc8_pascal, and
 the fc6/fc7 stack need backward computation, accuracy and label_input_1_split do not]
I1230 09:16:50.227465 21329 net.cpp:209] This network produces output accuracy
I1230 09:16:50.227468 21329 net.cpp:209] This network produces output loss
I1230 09:16:50.227479 21329 net.cpp:475] Collecting Learning Rate and Weight Decay.
I1230 09:16:50.227484 21329 net.cpp:220] Network initialization done.
I1230 09:16:50.227488 21329 net.cpp:221] Memory required for data: 12616200
I1230 09:16:50.227777 21329 solver.cpp:155] Creating test net (#0) specified by net file: pascal_finetune_fc_spm_train_test_new.prototxt
[the test net (state { phase: TEST }) is initialized from the same parameters; its
 layer-by-layer setup log is identical to the training net's above, ending with
 "Network initialization done." and "Memory required for data: 12616200"]
I1230 09:16:52.190469 21329 solver.cpp:45] Solver scaffolding done.
I1230 09:16:52.190482 21329 matcaffe.cpp:473] Recovery from /home/wuqiang/wq/SPP_net/data/Zeiler_conv5_new/Zeiler_conv5
I1230 09:16:52.786281 21329 net.cpp:713] Copying source layer label_input_1_split
I1230 09:16:52.786317 21329 net.cpp:710] Ignoring source layer conv1
I1230 09:16:52.786321 21329 net.cpp:710] Ignoring source layer relu1
I1230 09:16:52.786324 21329 net.cpp:710] Ignoring source layer norm1
I1230 09:16:52.786327 21329 net.cpp:710] Ignoring source layer pool1
I1230 09:16:52.786330 21329 net.cpp:710] Ignoring source layer conv2
I1230 09:16:52.786334 21329 net.cpp:710] Ignoring source layer relu2
I1230 09:16:52.786336 21329 net.cpp:710] Ignoring source layer norm2
I1230 09:16:52.786339 21329 net.cpp:710] Ignoring source layer pool2
I1230 09:16:52.786342 21329 net.cpp:710] Ignoring source layer conv3
I1230 09:16:52.786345 21329 net.cpp:710] Ignoring source layer relu3
I1230 09:16:52.786348 21329 net.cpp:710] Ignoring source layer conv4
I1230 09:16:52.786351 21329 net.cpp:710] Ignoring source layer relu4
I1230 09:16:52.786355 21329 net.cpp:710] Ignoring source layer conv5 I