Stephenfang51 / tracklite

Using TensorRT to speed up YOLOv3 with Deep SORT for MOT
157 stars 48 forks

Error when converting .weights to ONNX #3

Open job2003 opened 4 years ago

job2003 commented 4 years ago

```
$ python3 yolov3_to_onnx.py
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
  %000_net[FLOAT, 64x3x416x416]
) initializers (
  ...
  %105_convolutional, %105_convolutional_bn_scale, %105_convolutional_bn_bias, %105_convolutional_bn_mean, %105_convolutional_bn_var)
  %105_convolutional_lrelu = LeakyRelu[alpha = 0.1](%105_convolutional_bn)
  %106_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](%105_convolutional_lrelu, %106_convolutional_conv_weights, %106_convolutional_conv_bias)
  return %082_convolutional, %094_convolutional, %106_convolutional
}
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 749, in <module>
    main()
  File "yolov3_to_onnx.py", line 741, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/home/nvidia/.local/lib/python3.6/site-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Op registered for Upsample is deprecated in domain_version of 10

==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }
```

Stephenfang51 commented 4 years ago

> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

Dear Sir

What is your ONNX version?

After googling the issue, the problem seems to be your ONNX version; please try onnx==1.4.0 or 1.4.1.
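One way to act on this advice is to check the installed version before attempting the conversion. A minimal sketch (the accepted version strings come from this thread, and the function name is illustrative, not from the repo):

```python
# Sketch only: reject onnx versions other than the 1.4.x releases
# reported to work in this thread, before running yolov3_to_onnx.py.
import re

SUPPORTED = {"1.4.0", "1.4.1"}

def is_supported_onnx(version: str) -> bool:
    # Keep just the numeric "major.minor.patch" prefix (e.g. "1.4.1")
    # so suffixes like "1.4.1.post1" still match.
    match = re.match(r"\d+\.\d+\.\d+", version)
    return bool(match) and match.group(0) in SUPPORTED

# The reporter's onnx 1.5 would be rejected; 1.4.1 would pass.
print(is_supported_onnx("1.5.0"))  # -> False
print(is_supported_onnx("1.4.1"))  # -> True
```

In an actual script one would compare `onnx.__version__` against this set before calling `onnx.checker.check_model`.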

Thanks

job2003 commented 4 years ago

My ONNX version is 1.5. Can I use the "/usr/src/tensorrt/samples/python/yolov3_onnx" example to convert .weights to .onnx and then to .trt, and then use the final .trt in this project?

job2003 commented 4 years ago

```
$ python3 run_tracker.py --usb
Opening in BLOCKING MODE
Loading weights from ./deep_sort/deep/checkpoint/ckpt.t7... Done!
Reading engine from file ./weights/yolov3_int8.engine
[TensorRT] ERROR: deserializationUtils.cpp (528) - Serialization Error in load: 0 (Serialized engine contains plugin, but no plugin factory was provided. To deserialize an engine without a factory, please use IPluginV2 instead.)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "run_tracker.py", line 107, in <module>
    main()
  File "run_tracker.py", line 88, in main
    tracker = Tracker(cfg, args.engine_path)
  File "/home/nvidia/workspace/detect_tracking/tracklite/tracker/tracker.py", line 24, in __init__
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
```

what's the problem?

Stephenfang51 commented 4 years ago

> My ONNX version is 1.5. Can I use the "/usr/src/tensorrt/samples/python/yolov3_onnx" example to convert .weights to .onnx and then to .trt, and then use the final .trt in this project?

I think that's inappropriate, since this project is based on a lower version of TensorRT and ONNX; that is why the error message shows: [TensorRT] ERROR: deserializationUtils.cpp (528) - Serialization Error in load: 0 (Serialized engine contains plugin, but no plugin factory was provided. To deserialize an engine without a factory, please use IPluginV2 instead.)

Sorry for the low versions of TensorRT and ONNX; the project is based on the Jetson Nano.
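For anyone hitting this, the crash in tracker.py can be made easier to diagnose by failing fast when deserialization returns None. A rough sketch (the wrapper name and error message are illustrative, not from the repo; `deserialize_cuda_engine` is the standard TensorRT Python runtime call):

```python
# Sketch only: wrap TensorRT engine deserialization so a None result
# (as in the traceback above) raises a clear error immediately instead
# of failing later on engine.create_execution_context().
def load_engine_or_raise(runtime, engine_path):
    with open(engine_path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        raise RuntimeError(
            f"Could not deserialize {engine_path}: the engine was likely "
            "built with a different TensorRT version, or it contains a "
            "plugin and needs the matching plugin support loaded."
        )
    return engine
```

This does not fix the version mismatch itself; engines must be rebuilt with the same TensorRT version that deserializes them.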

job2003 commented 4 years ago

I use a Xavier; my TensorRT version is 6 and my ONNX version is 1.5. Is it possible to use TensorRT 5 and ONNX 1.4? I can downgrade ONNX to 1.4.

Stephenfang51 commented 4 years ago

> I use a Xavier; my TensorRT version is 6 and my ONNX version is 1.5. Is it possible to use TensorRT 5 and ONNX 1.4? I can downgrade ONNX to 1.4.

Of course it can; I was running this on TensorRT 5 and ONNX 1.4.

job2003 commented 4 years ago

I used TensorRT 6 and ONNX 1.4 on the Xavier; it does not work, and the same errors occur.

Do you have plans to update the project to a higher TensorRT and ONNX version?

jamie5tgg commented 4 years ago

I'm getting the same issue too 😭 using onnx 1.4.1, and I've also tried 1.4.0. Has anyone found a fix for this? I'd love to try this out. @Stephenfang51 it's really great to see Deep SORT and YOLO put together for the Jetson Nano!

MuhammadAsadJaved commented 4 years ago

+1. I have the same problem when running python3 run_tracker.py. My TensorRT version is 7.1.0-1. Note: I converted weights > onnx > engine using another TensorRT version.