dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

ONNX to engine with INT8 failed #1754

Open 7788zun opened 10 months ago

7788zun commented 10 months ago

I want to convert an ONNX model to a TensorRT engine with INT8, but when I use the engine to run prediction on a video, the error below occurs. Please help me.

(base) root@autodl-container-408811a93c-192be558:~/autodl-tmp/testlzy/test_on_class# python demo.py
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify', or 'pose'.
Loading /root/autodl-tmp/testlzy/test_on_class/best.engine for TensorRT inference...
[11/07/2023-10:18:10] [TRT] [I] [MemUsageChange] Init CUDA: CPU +328, GPU +0, now: CPU 771, GPU 437 (MiB)
Traceback (most recent call last):
  File "demo.py", line 18, in <module>
    results = model(frame)
  File "/root/miniconda3/lib/python3.8/site-packages/ultralytics/engine/model.py", line 95, in __call__
    return self.predict(source, stream, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/ultralytics/engine/model.py", line 228, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/root/miniconda3/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 305, in setup_model
    self.model = AutoBackend(model or self.args.model,
  File "/root/miniconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/ultralytics/nn/autobackend.py", line 174, in __init__
    metadata = json.loads(f.read(meta_len).decode('utf-8'))  # read metadata
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 4: invalid continuation byte
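For context, the traceback shows Ultralytics' AutoBackend failing to decode the JSON metadata header it expects at the start of the .engine file. Engines exported through Ultralytics itself embed that header; an engine built outside Ultralytics (e.g. directly with trtexec) does not, so the read returns raw engine bytes and the UTF-8/JSON decode fails. Below is a minimal sketch of the export-and-predict path that produces a compatible engine, assuming a best.pt checkpoint, placeholder file paths, and a recent Ultralytics release (TensorRT INT8 export options vary by version):

```python
from ultralytics import YOLO

# Export .pt -> TensorRT engine. Ultralytics prepends a small JSON metadata
# header to the .engine file, which AutoBackend parses back at load time
# (the json.loads call in the traceback above).
model = YOLO("best.pt")  # hypothetical PyTorch checkpoint
model.export(format="engine", int8=True, data="data.yaml")  # INT8 needs a calibration dataset

# Inference: pass task explicitly to avoid the "unable to guess model task"
# warning seen in the log.
model = YOLO("best.engine", task="detect")
for result in model("video.mp4", stream=True):  # hypothetical video path
    print(result.boxes)
```

If the engine must be built with trtexec instead, it would need to be run through the TensorRT runtime API directly rather than through YOLO(), since the Ultralytics loader assumes its own metadata layout.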