lyuwenyu opened 1 year ago
Hello, thank you for the great work, I am a real fan of RT-DETR.
RT-DETR can also be deployed via mmdeploy with the TensorRT and ONNX Runtime backends. OpenVINO is not supported natively, but I use the OpenVINOExecutionProvider together with ort.GraphOptimizationLevel.ORT_DISABLE_ALL in ONNX Runtime. These settings give me a small performance improvement on CPU.
Here is an example of running trtinfer.py; the blog post also has detailed code for ONNX: https://zhuanlan.zhihu.com/p/657506252
```python
# Missing imports
import numpy as np
import pycuda.driver as cuda
# New import
import cv2
# TRTInference is defined in trtinfer.py (see the linked blog post)

cuda.init()
device_ctx = cuda.Device(0).make_context()

mpath = "../rtdetr_pytorch/rtdetr_r101vd_6x_coco_from_paddle.trt"
image_file = "../rtdetr_pytorch/demo.jpg"

model = TRTInference(mpath, backend='cuda')

# Preprocess: resize to the 640x640 model input, scale to [0, 1], HWC -> NCHW
# (cv2 loads BGR; convert with cv2.cvtColor if your exported model expects RGB)
img = cv2.imread(image_file)
im = cv2.resize(img, (640, 640)).astype(np.float32) / 255.0
im = np.ascontiguousarray(im.transpose(2, 0, 1)[None])

# orig_target_sizes sets the scale of the output boxes;
# [[640, 640]] keeps them in the resized-input space
size = np.ascontiguousarray(np.array([[640, 640]], dtype=np.int32))

blob = {"images": im, "orig_target_sizes": size}
res = model(blob)
print(res)

device_ctx.pop()
```
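The raw output still contains low-confidence detections. A small sketch of thresholding it, assuming `res` unpacks to `(labels, boxes, scores)` arrays as in RT-DETR's exported deploy graph (the toy arrays below stand in for a real model output):

```python
import numpy as np

def filter_detections(labels, boxes, scores, thr=0.5):
    """Keep only detections whose confidence exceeds thr."""
    keep = scores > thr
    return labels[keep], boxes[keep], scores[keep]

# Toy data standing in for one image's model output
labels = np.array([1, 2, 3])
boxes = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 2, 2]], dtype=np.float32)
scores = np.array([0.9, 0.4, 0.7])

l, b, s = filter_detections(labels, boxes, scores)
# Two detections survive the 0.5 threshold
```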
Could you please share the command or the changes needed to deploy with mmdeploy?
Can RT-DETR be deployed on an RK3568?
RT-DETR C++ Tensorrt implementation for V1 and V2
https://github.com/PrinceP/tensorrt-cpp-for-onnx?tab=readme-ov-file#rt-detr