leovandriel / caffe2_cpp_tutorial

C++ transcripts of the Caffe2 Python tutorials and other C++ example code
BSD 2-Clause "Simplified" License

detectron net create error #68

Open HappyKerry opened 6 years ago

HappyKerry commented 6 years ago

I converted a Detectron pkl model to a Caffe2 pb model. An error occurs when `CAFFE_ENFORCE(workspace.CreateNet(model.predict.net));` is called in imagenet.cc:

```
terminate called after throwing an instance of 'caffe2::EnforceNotMet'
  what(): [enforce fail at operator.cc:185] op. Cannot create operator of type 'GenerateProposals' on the device 'CUDA'.
Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the
operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing.
Operator def:
  input: "rpn_cls_probs_fpn2_cpu" input: "rpn_bbox_pred_fpn2_cpu" input: "im_info" input: "anchor2_cpu"
  output: "rpn_rois_fpn2" output: "rpn_roi_probs_fpn2"
  name: "" type: "GenerateProposals"
  arg { name: "nms_thres" f: 0.7 } arg { name: "min_size" f: 0 } arg { name: "spatial_scale" f: 0.25 }
  arg { name: "correct_transform_coords" i: 1 } arg { name: "post_nms_topN" i: 1000 } arg { name: "pre_nms_topN" i: 1000 }
  device_option { device_type: 1 }
```

leovandriel commented 6 years ago

It seems GenerateProposals is not implemented for CUDA, but device_type is set to 1 (CUDA). How did you make the Caffe2 model?

HappyKerry commented 6 years ago

@leonardvandriel I changed the model using the following script https://github.com/facebookresearch/Detectron/blob/master/tools/convert_pkl_to_pb.py

leovandriel commented 6 years ago

You could try changing `device_option { device_type: 1 }` from GPU to CPU, using `op.mutable_device_option()->set_device_type(CPU)`. You can find the op by iterating through the model (NetDef) with `for (auto& op : *net.mutable_op())` and checking `op.type() == "GenerateProposals"`. Perhaps take a look at net_gradient.cc to see how to modify a model.

HappyKerry commented 6 years ago

@leonardvandriel I changed the device_option of "GenerateProposals", but now it fails on the Tensor input im_info:

```
terminate called after throwing an instance of 'caffe2::EnforceNotMet'
  what(): [enforce fail at blob.h:81] IsType<caffe2::Tensor<caffe2::CPUContext>>(). wrong type for the Blob instance.
Blob contains caffe2::Tensor<caffe2::CUDAContext> while caller expects caffe2::Tensor<caffe2::CPUContext>.
Offending Blob name: im_info.
Error from operator:
  input: "rpn_cls_probs_fpn2_cpu" input: "rpn_bbox_pred_fpn2_cpu" input: "im_info" input: "anchor2_cpu"
  output: "rpn_rois_fpn2" output: "rpn_roi_probs_fpn2"
  name: "" type: "GenerateProposals"
  arg { name: "nms_thres" f: 0.7 } arg { name: "min_size" f: 0 } arg { name: "spatial_scale" f: 0.25 }
  arg { name: "correct_transform_coords" i: 1 } arg { name: "post_nms_topN" i: 1000 } arg { name: "pre_nms_topN" i: 1000 }
  device_option { device_type: 0 cuda_gpu_id: 0 }
```

(The template arguments were stripped by the issue renderer; the blob holds a CUDA tensor while the now-CPU op expects a CPU tensor.)

leovandriel commented 6 years ago

I'm afraid this goes a bit beyond my understanding of the Detectron architecture. You could chase this down by selectively moving operations to the CPU or GPU they belong on, but that might be a dead end. Perhaps there's a way to export a working model from, say, Python, to get a better understanding of where the model conversion failed.
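One way to chase down the im_info mismatch specifically (a sketch, not a verified fix): the sibling inputs of GenerateProposals already have `_cpu`-suffixed copies (`rpn_cls_probs_fpn2_cpu`, `anchor2_cpu`), which suggests the converter inserted GPU-to-CPU copies for them but not for `im_info`. Assuming Caffe2's `CopyGPUToCPU` operator is available, inserting an analogous copy op into the NetDef and rewiring GenerateProposals to read the copy might look like this in NetDef text format (`im_info_cpu` is a hypothetical blob name):

```
op {
  input: "im_info"
  output: "im_info_cpu"
  type: "CopyGPUToCPU"
  device_option { device_type: 1 }
}
```

The GenerateProposals op's `input: "im_info"` would then be changed to `input: "im_info_cpu"`. The copy op keeps `device_type: 1` because it runs on the CUDA side, reading a GPU tensor and producing a CPU tensor.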