lijoe123 opened 2 years ago
@lakshanthad. Hi, I saw your pulling request #484 to MMDeploy, which added NX to the support device list. Could you please give some help if possible?
Hi, I saw your issue https://github.com/open-mmlab/mmdeploy/issues/1063. For Jetpack 5.0.2, since there is no way to downgrade PyTorch, maybe you could try using MMCV with a version >1.4.0. We have not tested the latest MMCV on Jetson devices, though.
OK, thank you for your answer! But that did not solve the problem above.
Yeah, looking forward to the reply from @lakshanthad
/(ㄒoㄒ)/~~ still waiting for this guy!
Hi @lijoe123. Sorry that @lakshanthad did not respond to the issue. The Xavier NX device listed in #484 may have been added by mistake and was never actually verified. We will remove it from the doc.
When I run the demo:

python ./tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    $PATH_TO_MMDET/configs/retinanet/retinanet_r18_fpn_1x_coco.py \
    retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth \
    $PATH_TO_MMDET/demo/demo.jpg \
    --work-dir work_dir \
    --show \
    --device cuda:0 \
    --dump-info
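As a minimal sketch (not part of the original report), the command above assumes $PATH_TO_MMDET points at a local mmdetection checkout; the default path used below is only an illustrative assumption, and a quick pre-flight check avoids a longer deploy.py traceback when it is wrong:

```shell
# Hedged sketch: pre-flight check for the deploy command above.
# PATH_TO_MMDET must point at a local mmdetection checkout; the default
# used here is only an example, not taken from the thread.
PATH_TO_MMDET=${PATH_TO_MMDET:-$HOME/mmdetection}
CFG="$PATH_TO_MMDET/configs/retinanet/retinanet_r18_fpn_1x_coco.py"

# Report whether the detector config the command expects actually exists,
# instead of letting deploy.py fail later with a less obvious error.
if [ -f "$CFG" ]; then
    STATUS="config found"
else
    STATUS="config missing"
fi
echo "$STATUS: $CFG"
```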
It raised this problem. I also ran check_env:
After that I ran the command from issue #1059, but it didn't work:
/usr/src/tensorrt/bin/trtexec --onnx=./end2end.onnx --plugins=../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so --workspace=6000 --fp16 --saveEngine=end2end.engine
Here is its output. I'm looking forward to your answer. Thank you!
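When trtexec fails on a model containing MMDeploy custom ops, a simple first check (a sketch only, not the thread's own diagnosis) is whether the plugin library passed via --plugins was actually built; the relative path below is copied from the trtexec command above and may differ on your machine:

```shell
# Hedged sketch: confirm the mmdeploy custom-op library exists before
# handing it to trtexec via --plugins. The relative path is taken from
# the command in this thread and may differ on your machine.
PLUGIN=../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so
if [ -f "$PLUGIN" ]; then
    RESULT="present"
else
    RESULT="missing (rebuild mmdeploy with the TensorRT backend enabled)"
fi
echo "plugin $RESULT: $PLUGIN"
```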