You can load it with
TRTModule(engine, input_names, output_names)
and if you want to use the inference API in this repo, you can create a wrap detector with
TRTDetector(trt_module, model_cfg)
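For reference, a minimal loading sketch, assuming the converted model was saved with torch.save(trt_model.state_dict(), ...) and that TRTModule comes from torch2trt_dynamic (the import path may differ across versions; 'yolov3.pth' is a placeholder):

import torch
from torch2trt_dynamic import TRTModule  # assumed import path

# an empty TRTModule restores the engine and binding names
# from the saved state_dict
trt_model = TRTModule()
trt_model.load_state_dict(torch.load('yolov3.pth'))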
@grimoire What is engine? The engine file path? How do I get it?
And how do I use it if I don't want TRTModule? If I just need the TensorRT Python API, how do I run inference?
@grimoire Hi, how do I fix this:
[TensorRT] ERROR: 3: [executionContext.cpp::setBindingDimensions::945] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::setBindingDimensions::945, condition: profileMinDims.d[i] <= dimensions.d[i]. Supplied binding dimension [1,3,416,608] for bindings[0] exceed min ~ max range at index 2, maximum dimension in profile is 640, minimum dimension in profile is 608, but supplied dimension is 416.
)
[TensorRT] ERROR: 3: [executionContext.cpp::resolveSlots::1480] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::resolveSlots::1480, condition: allInputDimensionsSpecified(routine)
)
I am using something like this:
# import paths assumed from mmdet / mmdet2trt; adjust to your version
from mmdet.apis import inference_detector
from mmdet2trt.apis import create_wrap_detector, init_trt_model

trt_model = init_trt_model('yolov3.trt')
# print(trt_model)
image_path = './demo/demo.jpg'

# create wrap detector; cfg_path is the mmdetection config
# used for conversion, 0 is the device id
trt_detector = create_wrap_detector(trt_model, cfg_path, 0)

# result shares the same format as mmdetection
result = inference_detector(trt_detector, image_path)

# visualize
trt_detector.show_result(
    image_path,
    result,
    score_thr=0.5,
    win_name='mmdet2trt',
    show=True)
You can get the engine like this:
# trt_model is a TRTModule
# save converted model
torch.save(trt_model.state_dict(), save_model_path)
# save engine if you want to use it in c++ api
with open(save_engine_path, mode='wb') as f:
f.write(trt_model.state_dict()['engine'])
Of course, you can use the engine with the pure TensorRT API. Read the code of TRTModule for an example.
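For the pure-API route, a minimal deserialization sketch with the standard TensorRT Python API (the engine file name is a placeholder, and any custom plugin libs must be loaded first):

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, '')  # register TensorRT's built-in plugins

with open('yolov3.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()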
About the error, please read the FAQ.
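In short: the optimization profile fixed at conversion time must cover every shape you feed at runtime, and your 416 is below the profile minimum of 608 on that axis. A hedged sketch of widening the profile when converting; opt_shape_param follows the README of this era and may be named differently in your version, and cfg_path / weight_path stand for your mmdetection config and checkpoint:

from mmdet2trt import mmdet2trt

# min / opt / max shapes for the single image input;
# the error's 416x608 now falls inside [min, max]
opt_shape_param = [
    [
        [1, 3, 320, 320],    # min shape
        [1, 3, 608, 608],    # opt shape
        [1, 3, 1344, 1344],  # max shape
    ]
]
trt_model = mmdet2trt(cfg_path, weight_path, opt_shape_param=opt_shape_param)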
@grimoire Hi, may I ask how to load these plugins if I want to use the generated engine with the pure TensorRT API (without TRTModule involved)?
I got an error like this:
INFO 12.14 10:23:41 common.py:254: [INFO] Reading engine from file yolov3.engine
[12/14/2021-10:23:46] [TRT] [E] 1: [pluginV2Runner.cpp::load::290] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[12/14/2021-10:23:46] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
You can use ctypes.CDLL to load the plugin libs, or dlopen if you are using the C++ API.
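For example, a sketch (the path is an assumption; mmdet2trt's custom ops are built into amirstan_plugin, so point this at your own build):

import ctypes

# loading the shared library registers its IPluginCreator
# implementations before deserialize_cuda_engine runs;
# the path below is a placeholder
ctypes.CDLL('/path/to/amirstan_plugin/build/lib/libamirstan_plugin.so')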
@grimoire Oh, I successfully loaded the plugins. But I got some errors when trying to run inference with a dynamic-input-shape, dynamic-output TRT engine.
Do you know what could be the reason?
[12/14/2021-13:01:36] [TRT] [E] 3: [executionContext.cpp::resolveSlots::1480] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::resolveSlots::1480, condition: allInputDimensionsSpecified(routine)
)
(0)
input shape: (1, 3, 608, 608)
[12/14/2021-13:01:36] [TRT] [E] 3: [executionContext.cpp::resolveSlots::1480] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::resolveSlots::1480, condition: allInputDimensionsSpecified(routine)
)
[12/14/2021-13:01:36] [TRT] [E] 2: [executionContext.cpp::enqueueInternal::366] Error Code 2: Internal Error (Could not resolve slots: )
Inference time: 0.001188039779663086s
[0]
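That allInputDimensionsSpecified failure usually means the execution context was never given concrete input dimensions. A minimal sketch of the missing step, assuming binding 0 is the image input and context is the IExecutionContext:

# with a dynamic-shape engine, every input binding needs explicit
# dimensions before enqueue; binding index 0 and the shape are assumptions
context.set_binding_dimensions(0, (1, 3, 608, 608))
assert context.all_binding_shapes_specified
output_shape = context.get_binding_shape(1)  # only valid once inputs are set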
Using mmdet2trt I got a TRT engine file. How do I load it back in Python? Using torch.load?