PRBonn / rangenet_lib

Inference module for RangeNet++ (milioto2019iros, chen2019iros)
MIT License

Ran the demo, got runtime_error #15

Closed LongruiDong closed 4 years ago

LongruiDong commented 4 years ago

Hi, after build project, run the demo:

================================================================================
scan: src/rangenet_lib/example/000000.bin
path: src/rangenet_lib/model/darknet53/
verbose: 1
================================================================================
Setting verbosity to: false
Trying to open model
Trying to deserialize previously stored: src/rangenet_lib/model/darknet53//model.trt
Could not deserialize TensorRT engine. 
Generating from sratch... This may take a while...
Trying to generate trt engine from : src/rangenet_lib/model/darknet53//model.onnx
Platform HAS fp16 support.
No DLA selected.
----------------------------------------------------------------
Input filename:   src/rangenet_lib/model/darknet53//model.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
 ----- Parsing of ONNX model src/rangenet_lib/model/darknet53//model.onnx is Done ---- 
Success picking up ONNX model
Failure creating engine from ONNX model
Current trial size is 8589934592
Failure creating engine from ONNX model
Current trial size is 4294967296
Failure creating engine from ONNX model
Current trial size is 2147483648
Failure creating engine from ONNX model
Current trial size is 1073741824
Failure creating engine from ONNX model
Current trial size is 536870912
Failure creating engine from ONNX model
Current trial size is 268435456
Failure creating engine from ONNX model
Current trial size is 134217728
Failure creating engine from ONNX model
Current trial size is 67108864
Failure creating engine from ONNX model
Current trial size is 33554432
Failure creating engine from ONNX model
Current trial size is 16777216
Failure creating engine from ONNX model
Current trial size is 8388608
Failure creating engine from ONNX model
Current trial size is 4194304
Failure creating engine from ONNX model
Current trial size is 2097152
Failure creating engine from ONNX model
Current trial size is 1048576
terminate called after throwing an instance of 'std::runtime_error'
  what():  ERROR: could not create engine from ONNX.
Aborted (core dumped)

Does anyone know where the bug is?

Thanks!

jbehley commented 4 years ago

This error is TensorRT related. You just have to run the code again until it reaches a size that is suitable for your GPU. (I personally never encountered this error, but I read that other people "fixed" it like this.)

LongruiDong commented 4 years ago

> This error is TensorRT related. You just have to run the code again until it reaches a size that is suitable for your GPU. (I personally never encountered this error, but I read that other people "fixed" it like this.)

As you can see in #5, I tried adjusting MAX_WORKSPACE_SIZE and MIN_WORKSPACE_SIZE, but I still get the same error...

I also noticed there is a WARNING before parsing model.onnx:

WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).

I am wondering whether this ir_version conflict led to the error?

Chen-Xieyuanli commented 4 years ago

This issue also seems to be caused by the TensorRT and GPU versions. I will therefore close this one as well; feel free to ask me to reopen it if needed.