NVIDIA-AI-IOT / torch2trt

An easy to use PyTorch to TensorRT converter
MIT License

Int8 model inference problem #528

Open zhaowujin opened 3 years ago

zhaowujin commented 3 years ago

I converted the model to int8 successfully, but I get an error at inference time.

The error is as follows:

```
self.model_trt.load_state_dict(torch.load(model_path))
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 832, in load_state_dict
    load(self)
  File "/usr/local/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 827, in load
    state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs)
  File "/usr/local/lib64/python3.6/site-packages/torch2trt-0.1.0-py3.6-linux-x86_64.egg/torch2trt/torch2trt.py", line 443, in _load_from_state_dict
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
```

Model loading code:

```python
self.model_trt = TRTModule()
self.model_trt.load_state_dict(torch.load(model_path))
```

Conversion code:

```python
self.model_trt = torch2trt(self.model, [data_set[0]], int8_mode=True,
                           max_batch_size=100, int8_calib_dataset=data_set)
torch.save(self.model_trt.state_dict(), '10_trt_int8_1000.pth')
```
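The `AttributeError` in the traceback means `self.engine` is `None`, i.e. TensorRT failed to deserialize the saved engine, which typically happens when the TensorRT version at load time differs from the one used for conversion. A minimal diagnostic sketch (not from this thread; the function names are hypothetical, and it assumes torch2trt 0.1.0 stores the serialized engine under the `'engine'` state-dict key, as the traceback suggests):

```python
def extract_engine_bytes(state_dict):
    """Return the serialized engine from a torch2trt state dict.

    Assumption: torch2trt stores the serialized TensorRT engine under
    the key 'engine' (consistent with the torch2trt 0.1.0 traceback
    above). Fails loudly instead of the opaque 'NoneType' error.
    """
    engine_bytes = state_dict.get("engine")
    if engine_bytes is None or len(engine_bytes) == 0:
        raise RuntimeError(
            "no serialized engine in this state dict -- was the model saved "
            "with torch.save(model_trt.state_dict(), path)?")
    return engine_bytes


def diagnose_engine(state_dict_path):
    """Deserialize the engine manually with a verbose TensorRT logger.

    deserialize_cuda_engine() returning None (the root cause of the
    AttributeError above) usually means the engine was built with a
    different TensorRT version than the one installed at load time.
    """
    import tensorrt as trt  # local imports keep the helper above importable
    import torch

    engine_bytes = extract_engine_bytes(torch.load(state_dict_path))
    logger = trt.Logger(trt.Logger.VERBOSE)  # verbose log shows why it failed
    with trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(bytes(engine_bytes))
    if engine is None:
        raise RuntimeError(
            "engine deserialization failed -- check that the TensorRT "
            "version at inference time matches the conversion environment")
    return engine
```

Running the diagnostic on the saved `.pth` with the verbose logger should print the actual deserialization failure reason, which `TRTModule.load_state_dict` swallows.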

wang-TJ-20 commented 2 years ago

@zhaowujin Hello, have you solved this problem?