Closed chyao7 closed 1 year ago
@chyao7 Hi, could you post here which model config and checkpoint you are using?
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.
Checklist
Describe the bug
```
Writing Calibration Cache for calibrator: TRT-8204-EntropyCalibration2
[12/29/2022-10:43:19] [TRT] [W] Missing scale and zero-point for tensor (Unnamed Layer* 124) [Shuffle]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[12/29/2022-10:43:19] [TRT] [W] Missing scale and zero-point for tensor (Unnamed Layer* 125) [Softmax]_output, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[12/29/2022-10:43:21] [TRT] [W] TensorRT was linked against cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1
[12/29/2022-10:43:21] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 1939, GPU 991 (MiB)
[12/29/2022-10:43:21] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 1939, GPU 999 (MiB)
[12/29/2022-10:43:21] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[12/29/2022-10:43:24] [TRT] [E] 1: Unexpected exception None
1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
Process Process-4:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/utils/utils.py", line 95, in to_backend
    return backend_mgr.to_backend(
  File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/backend_manager.py", line 129, in to_backend
    onnx2tensorrt(
  File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 79, in onnx2tensorrt
    from_onnx(
  File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 234, in from_onnx
    assert engine is not None, 'Failed to create TensorRT engine'
AssertionError: Failed to create TensorRT engine
2022-12-29 10:43:26,369 - mmdeploy - ERROR - mmdeploy.apis.utils.utils.to_backend with Call id: 2
```

Reproduction
Converting ResNet-50 to a TensorRT int8 engine with the mmdeploy deploy pipeline.
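One notable warning in the log above is the cuBLAS/cuBLASLt version mismatch ("linked against 11.6.5 but loaded 11.5.1"), which can accompany engine-build failures. As a minimal sketch (a hypothetical helper, not part of mmdeploy or TensorRT), you can scan a build log for such mismatch warnings before digging into the traceback:

```python
import re

# Matches TensorRT-style warnings of the form:
#   "... linked against <lib> <ver> but loaded <lib> <ver>"
MISMATCH = re.compile(r"linked against (\S+) ([\d.]+) but loaded \1 ([\d.]+)")

def find_version_mismatches(log_text):
    """Return (library, linked_version, loaded_version) tuples found in the log."""
    return [m.groups() for m in MISMATCH.finditer(log_text)]

# Line taken from the log in this issue:
log = ("[12/29/2022-10:43:21] [TRT] [W] TensorRT was linked against "
      "cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1")
print(find_version_mismatches(log))
# -> [('cuBLAS/cuBLASLt', '11.6.5', '11.5.1')]
```

If this turns up mismatches, aligning the CUDA toolkit libraries on the machine with the versions TensorRT was built against is usually worth trying before re-running the conversion.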
Environment
Error traceback
No response