WisconsinAIVision / yolact_edge

The first competitive instance segmentation approach that runs on small edge devices at real-time speeds.
MIT License
1.26k stars · 272 forks

How to run yolact_edge on Jetson AGX Xavier? #136

Open MiaoRain opened 2 years ago

MiaoRain commented 2 years ago

Hi, how can I run yolact_edge on a Jetson AGX Xavier? Thanks!

malfonsoNeoris commented 2 years ago

Hi Miao, we have successfully converted the resulting model to TensorRT. Unlike other models (where you get a single TRT engine file, or can load one in DeepStream), here the conversion creates several small TRT engines for the different parts of the model; you then load them with yolact_edge and run inference with it.

MiaoRain commented 2 years ago

> Hi Miao, we have successfully converted the resulting model to TensorRT. Unlike other models (where you get a single TRT engine file, or can load one in DeepStream), here the conversion creates several small TRT engines for the different parts of the model; you then load them with yolact_edge and run inference with it.

Hi, thanks for replying. Right now I have several TRT engines, but how do I run inference with them? Or could you send the inference script to my email 260599780@qq.com? Thanks a lot.

haotian-liu commented 2 years ago

Hi, there is actually no difference between running inference on a Jetson AGX Xavier and on a normal Ubuntu machine. What problem are you encountering?
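For reference, a minimal sketch of such an invocation, built from the TensorRT flags that appear later in this thread; the weight filename and config name below are placeholders, so substitute your own:

```shell
# First run converts the model to TensorRT engines (cached for reuse),
# then runs inference/benchmarking with them.
# Placeholder weights/config -- replace with your trained model.
python3 eval.py \
    --trained_model=./weights/yolact_edge_54_800000.pth \
    --config=yolact_edge_config \
    --use_fp16_tensorrt \
    --use_tensorrt_safe_mode \
    --benchmark
```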

MiaoRain commented 2 years ago

> Hi, there is actually no difference between running inference on a Jetson AGX Xavier and on a normal Ubuntu machine. What problem are you encountering?

Hi, right now it seems OK. Thanks a lot.

MiaoRain commented 2 years ago

> Hi, there is actually no difference between running inference on a Jetson AGX Xavier and on a normal Ubuntu machine. What problem are you encountering?

Hi Haotian, I wonder whether yolact++ could also be converted to TensorRT? Have you tested it?

haotian-liu commented 2 years ago

I haven't tried converting it to TensorRT. There might be a little more work when converting the deformable convolution.

MiaoRain commented 2 years ago

> I haven't tried converting it to TensorRT. There might be a little more work when converting the deformable convolution.

I think so, thanks.

MiaoRain commented 2 years ago

> I haven't tried converting it to TensorRT. There might be a little more work when converting the deformable convolution.

Hi Haotian, right now the MobileNetV2 model only reaches 25 FPS on the Xavier, which is even slower than the ResNet-101 numbers in your paper. How can I further improve the inference speed? Thanks.

```shell
python3 eval.py --trained_model=./weights/0808/yolact_mobilenetv2_221_80000.pth --config=yolact_edge_mobilenetv2_config --use_fp16_tensorrt --use_tensorrt_safe_mode --benchmark --trt_batch_size=8
```
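As a sanity check on FPS numbers like the one above, a self-contained timing sketch can help separate model speed from pipeline overhead. The inference function here is a stand-in; swap in the real forward pass:

```python
import time

def measure_fps(infer, n_frames=100, warmup=10):
    """Time `infer` over n_frames calls and return frames per second.

    A few warmup calls are run first so one-time costs (engine
    deserialization, CUDA context setup, ...) don't skew the average.
    """
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_frames):
        infer()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

if __name__ == "__main__":
    # Stand-in workload: ~10 ms per "frame", so roughly 100 FPS at most.
    fake_infer = lambda: time.sleep(0.01)
    print(f"{measure_fps(fake_infer, n_frames=50):.1f} FPS")
```

Measuring end-to-end (including pre/post-processing and display) versus the bare forward pass often explains large gaps between reported and observed FPS.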

haotian-liu commented 2 years ago

Are there any modifications or changes to the codebase?