ttanzhiqiang opened this issue 2 years ago
@ttanzhiqiang thanks a lot for sharing! How much inference speed were you able to reach?
17 ms per image.
@ttanzhiqiang thanks a lot for the response. Can you share the inference code for running the ONNX-converted / TensorRT-converted model?
@ttanzhiqiang thanks for the response. Here I see only C++; is there a Python version?
See https://github.com/ttanzhiqiang/onnx_tensorrt_project
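The linked repo is C++ only, but as a starting point for a Python version, here is a minimal sketch using onnxruntime. It is not code from that repo: the model path `model.onnx`, the 1x3x640x640 input shape, and the use of the TensorRT execution provider are all assumptions for illustration.

```python
# Minimal sketch (assumptions: model path, input shape, provider availability).
import numpy as np
import onnxruntime as ort

# Prefer TensorRT, then CUDA, then CPU, keeping only providers actually
# available in this onnxruntime build (TensorRT needs onnxruntime-gpu
# built with TensorRT support).
preferred = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)  # assumed path

input_name = session.get_inputs()[0].name

# Dummy input standing in for a preprocessed image batch; adjust the shape
# to match your exported model.
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```

With the TensorRT execution provider selected, the same ONNX file is compiled to a TensorRT engine under the hood, so this one script covers both the ONNX and TensorRT paths asked about above; the actual pre/postprocessing still has to match whatever the C++ code in the repo does.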