open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

AssertionError: Failed to create TensorRT engine on Jetson Xavier NX? #1103

Open lijoe123 opened 2 years ago

lijoe123 commented 2 years ago

When I run the demo:

python ./tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    $PATH_TO_MMDET/configs/retinanet/retinanet_r18_fpn_1x_coco.py \
    retinanet_r18_fpn_1x_coco_20220407_171055-614fd399.pth \
    $PATH_TO_MMDET/demo/demo.jpg \
    --work-dir work_dir \
    --show \
    --device cuda:0 \
    --dump-info

it reports this error:

[2022-09-26 09:50:35.024] [mmdeploy] [info] [model.cpp:98] Register 'DirectoryModel'
[2022-09-26 09:50:40.964] [mmdeploy] [info] [model.cpp:98] Register 'DirectoryModel'
/home/jetson/mmdetection/mmdet/datasets/utils.py:70: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
  'data pipeline in your config file.', UserWarning)
[2022-09-26 09:51:03.577] [mmdeploy] [info] [model.cpp:98] Register 'DirectoryModel'
2022-09-26 09:51:03,594 - mmdeploy - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
load checkpoint from local path: faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
/home/jetson/mmdetection/mmdet/datasets/utils.py:70: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
  'data pipeline in your config file.', UserWarning)
2022-09-26 09:51:27,406 - mmdeploy - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
2022-09-26 09:51:27,407 - mmdeploy - INFO - Export PyTorch model to ONNX: work_dir/end2end.onnx.
2022-09-26 09:51:27,489 - mmdeploy - WARNING - Can not find torch._C._jit_pass_onnx_deduplicate_initializers, function rewrite will not be applied
/home/jetson/mmdeploy/mmdeploy/core/optimizers/function_marker.py:158: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  ys_shape = tuple(int(s) for s in ys.shape)
/home/jetson/mmdetection/mmdet/models/dense_heads/anchor_head.py:123: UserWarning: DeprecationWarning: anchor_generator is deprecated, please use "prior_generator" instead
  warnings.warn('DeprecationWarning: anchor_generator is deprecated, '
/home/jetson/mmdetection/mmdet/core/anchor/anchor_generator.py:333: UserWarning: ``grid_anchors`` would be deprecated soon. Please use ``grid_priors``
  warnings.warn('``grid_anchors`` would be deprecated soon. '
/home/jetson/mmdetection/mmdet/core/anchor/anchor_generator.py:370: UserWarning: ``single_level_grid_anchors`` would be deprecated soon. Please use ``single_level_grid_priors``
  '``single_level_grid_anchors`` would be deprecated soon. '
/home/jetson/mmdeploy/mmdeploy/codebase/mmdet/models/dense_heads/rpn_head.py:78: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert cls_score.size()[-2:] == bbox_pred.size()[-2:]
/home/jetson/mmdeploy/mmdeploy/pytorch/functions/topk.py:57: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if k > size:
/home/jetson/mmdeploy/mmdeploy/codebase/mmdet/core/bbox/delta_xywh_bbox_coder.py:39: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert pred_bboxes.size(0) == bboxes.size(0)
/home/jetson/mmdeploy/mmdeploy/codebase/mmdet/core/bbox/delta_xywh_bbox_coder.py:41: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert pred_bboxes.size(1) == bboxes.size(1)
/home/jetson/mmdeploy/mmdeploy/codebase/mmdet/deploy/utils.py:93: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
  assert len(max_shape) == 2, '`max_shape` should be [h, w]'
/home/jetson/mmdeploy/mmdeploy/codebase/mmdet/core/post_processing/bbox_nms.py:260: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  dets, labels = TRTBatchedNMSop.apply(boxes, scores, int(scores.shape[-1]),
/home/jetson/mmdeploy/mmdeploy/mmcv/ops/nms.py:178: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out_boxes = min(num_boxes, after_topk)
/home/jetson/mmdeploy/mmdeploy/mmcv/ops/nms.py:181: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  (batch_size, out_boxes)).to(scores.device))
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
/home/jetson/archiconda3/envs/mmdeploy/lib/python3.6/site-packages/torch/onnx/symbolic_opset9.py:2819: UserWarning: Exporting aten::index operator of advanced indexing in opset 11 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
  "If indices include negative values, the exported graph will produce incorrect results.")
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::MMCVMultiLevelRoiAlign type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::MMCVMultiLevelRoiAlign type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::GatherTopk type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::MMCVMultiLevelRoiAlign type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
2022-09-26 09:51:55,813 - mmdeploy - INFO - Execute onnx optimize passes.
2022-09-26 09:51:57,377 - mmdeploy - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
[2022-09-26 09:52:04.968] [mmdeploy] [info] [model.cpp:98] Register 'DirectoryModel'
2022-09-26 09:52:04,985 - mmdeploy - INFO - Start pipeline mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt in subprocess
2022-09-26 09:52:05,286 - mmdeploy - INFO - Successfully loaded tensorrt plugins from /home/jetson/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[09/26/2022-09:52:06] [TRT] [I] [MemUsageChange] Init CUDA: CPU +355, GPU +0, now: CPU 441, GPU 6225 (MiB)
[09/26/2022-09:52:06] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 441 MiB, GPU 6225 MiB
[09/26/2022-09:52:07] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 546 MiB, GPU 6330 MiB
[09/26/2022-09:52:07] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/26/2022-09:52:07] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[09/26/2022-09:52:09] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:09] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:09] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:09] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:09] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:09] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:09] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:09] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:09] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:09] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:09] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:09] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:09] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:09] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:09] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:09] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:09] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:09] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:10] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:10] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:10] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:10] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:10] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:10] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:10] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:10] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:10] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:10] [TRT] [I] No importer registered for op: GatherTopk. Attempting to import as plugin.
[09/26/2022-09:52:10] [TRT] [I] Searching for plugin: GatherTopk, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:10] [TRT] [I] Successfully created plugin: GatherTopk
[09/26/2022-09:52:11] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[09/26/2022-09:52:11] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:11] [TRT] [I] Successfully created plugin: TRTBatchedNMS
[09/26/2022-09:52:12] [TRT] [I] No importer registered for op: MMCVMultiLevelRoiAlign. Attempting to import as plugin.
[09/26/2022-09:52:12] [TRT] [I] Searching for plugin: MMCVMultiLevelRoiAlign, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:12] [TRT] [I] Successfully created plugin: MMCVMultiLevelRoiAlign
[09/26/2022-09:52:13] [TRT] [W] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/26/2022-09:52:13] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[09/26/2022-09:52:13] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[09/26/2022-09:52:13] [TRT] [I] Successfully created plugin: TRTBatchedNMS
[09/26/2022-09:52:14] [TRT] [W] Output type must be INT32 for shape outputs
[09/26/2022-09:52:14] [TRT] [W] DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
[09/26/2022-09:52:14] [TRT] [I] ---------- Layers Running on DLA ----------
[09/26/2022-09:52:14] [TRT] [I] ---------- Layers Running on GPU ----------
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_199
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_203
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_238
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_242
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_273
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_277
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_343
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_347
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_308
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_312
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_8 + Relu_9
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] [HostToDeviceCopy]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 211) [Shuffle]_output[Constant]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_204
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_217
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_224
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_225
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 316) [Shuffle]_output[Constant]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_243
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_254
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_261
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_262
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 421) [Shuffle]_output[Constant]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_278
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_289
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_296
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_297
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 947
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 631) [Shuffle]_output[Constant]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_348
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_359
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_366
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_367
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_368 + Unsqueeze_370
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_345
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_351
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_356
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_357
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_358 + Unsqueeze_369
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 940 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 940 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 941 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_372
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_373
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Add_375
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 886
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 526) [Shuffle]_output[Constant]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_313
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_324
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_331
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_332
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_333 + Unsqueeze_335
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_310
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_316
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_321
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_322
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_323 + Unsqueeze_334
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 879 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 879 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 880 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_337
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_338
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Add_340
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] MaxPool_10
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_11 + Relu_12
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_341 + Unsqueeze_580
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_376 + Unsqueeze_643
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_298 + Unsqueeze_300
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_263 + Unsqueeze_265
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_227 + Unsqueeze_229
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 703
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_201
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_208
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_213
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_214
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_215 + Unsqueeze_228
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 696 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 696 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 697 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_231
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_232
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Add_234
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 764
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_240
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_246
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_251
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_252
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_253 + Unsqueeze_264
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 757 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 757 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 758 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_267
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_268
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Add_270
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 825
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Mul_275
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_281
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_286
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Tile_287
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_288 + Unsqueeze_299
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 818 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 818 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 819 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_302
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Unsqueeze_303
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Add_305
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1102) [Constant] + (Unnamed Layer* 1103) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_604
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1206) [Constant] + (Unnamed Layer* 1207) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_667
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_13 + Relu_14
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1424 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1464 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1312 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1352 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_306 + Unsqueeze_517
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_271 + Unsqueeze_454
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_236 + Unsqueeze_388
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 790) [Constant] + (Unnamed Layer* 791) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_413
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 894) [Constant] + (Unnamed Layer* 895) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_478
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 998) [Constant] + (Unnamed Layer* 999) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_541
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_15
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1200 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1240 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1088 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1128 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 976 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1016 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_16 + Add_17 + Relu_18
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_19 + Relu_20
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_21 + Relu_22
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_23 + Add_24 + Relu_25
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_26 + Relu_27
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_28 + Relu_29
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_30 + Add_31 + Relu_32
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_33 + Relu_34
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_35 + Relu_36
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_37
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_38 + Add_39 + Relu_40
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_41 + Relu_42
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_43 + Relu_44
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_45 + Add_46 + Relu_47
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_48 + Relu_49
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_50 + Relu_51
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_52 + Add_53 + Relu_54
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_55 + Relu_56
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_57 + Relu_58
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_59 + Add_60 + Relu_61
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_62 + Relu_63
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_64 + Relu_65
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_66
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_67 + Add_68 + Relu_69
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_70 + Relu_71
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_72 + Relu_73
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_74 + Add_75 + Relu_76
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_77 + Relu_78
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_79 + Relu_80
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_81 + Add_82 + Relu_83
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_84 + Relu_85
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_86 + Relu_87
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_88 + Add_89 + Relu_90
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_91 + Relu_92
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_93 + Relu_94
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_95 + Add_96 + Relu_97
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_98 + Relu_99
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_100 + Relu_101
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_102 + Add_103 + Relu_104
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_105 + Relu_106
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_107 + Relu_108
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_109
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_110 + Add_111 + Relu_112
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_113 + Relu_114
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_115 + Relu_116
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_117 + Add_118 + Relu_119
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_120 + Relu_121
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_122 + Relu_123
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_124 + Add_125 + Relu_126
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_130
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Resize_138
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_159
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_173 + Relu_174
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] MaxPool_160
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_177 + Relu_178
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_129 + Add_139
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_180 || Conv_179
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_641 + Reshape_642
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_637 + Reshape_638
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_176 || Conv_175
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_578 + Reshape_579
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_574 + Reshape_575
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Resize_146
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_158
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_169 + Relu_170
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Sigmoid_576)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_577
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1071) [Constant] + (Unnamed Layer* 1072) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_587
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(989_85 + (Unnamed Layer* 1076) [Shuffle], Add_588)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1088) [Constant] + (Unnamed Layer* 1089) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_596
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Sigmoid_639)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_640
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1175) [Constant] + (Unnamed Layer* 1176) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_650
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(989_102 + (Unnamed Layer* 1180) [Shuffle], Add_651)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1192) [Constant] + (Unnamed Layer* 1193) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_659
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_128 + Add_147
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_172 || Conv_171
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_515 + Reshape_516
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_511 + Reshape_512
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Resize_154
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_157
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_165 + Relu_166
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1423 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1451 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1415 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1438 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1311 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1339 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1303 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1326 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Sigmoid_513)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_514
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 967) [Constant] + (Unnamed Layer* 968) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_524
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(989_68 + (Unnamed Layer* 972) [Shuffle], Add_525)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 984) [Constant] + (Unnamed Layer* 985) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_533
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_127 + Add_155
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_669
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_606
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TopK_607
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_611
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TopK_670
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_674
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_168 || Conv_167
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_452 + Reshape_453
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_448 + Reshape_449
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_156
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_161 + Relu_162
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1199 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1227 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1191 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1214 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Sigmoid_450)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_451
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 863) [Constant] + (Unnamed Layer* 864) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_461
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(989_51 + (Unnamed Layer* 868) [Shuffle], Add_462)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 880) [Constant] + (Unnamed Layer* 881) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_470
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_543
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TopK_544
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_548
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Conv_164 || Conv_163
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_385 + Reshape_387
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Transpose_378 + Reshape_381
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1087 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1115 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1079 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1102 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Sigmoid_382)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_384
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 759) [Constant] + (Unnamed Layer* 760) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_395
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(989 + (Unnamed Layer* 764) [Shuffle], Add_397)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 776) [Constant] + (Unnamed Layer* 777) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ConstantOfShape_405
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_480
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TopK_481
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_485
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 975 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1003 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 967 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 990 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Squeeze_415
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TopK_416
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_421
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1533 + (Unnamed Layer* 1276) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1531 + (Unnamed Layer* 1273) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_422
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_486
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_549
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_612
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_675
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1031 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1143 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1255 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1367 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1479 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_709
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Mul_711, Add_713)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Slice_716
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Clip_725)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] {ForeignNode[Flatten_440...Concat_750]}
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_872
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Range_769
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_620
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_683
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_557
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_494
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] GatherTopk_430
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_431
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_495
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_558
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_621
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_684
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1041 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1153 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1265 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1377 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1489 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TRTBatchedNMS_755
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Slice_786
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Cast_770
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_772
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Expand_784
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1671 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1676 copy
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_791
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] MMCVMultiLevelRoiAlign_792
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1533_142 + (Unnamed Layer* 1667) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] 1747 + (Unnamed Layer* 1664) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Flatten_793 + (Unnamed Layer* 1554) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Gemm_794 + Relu_795
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Gemm_796 + Relu_797
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] squeeze_after_Relu_797
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1589) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Gemm_799 || Gemm_798
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1594) [Shuffle] + Reshape_818 + Reshape_827
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Mul_829, Add_830)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Slice_832
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1583) [Shuffle] + Reshape_812 + (Unnamed Layer* 1625) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Softmax_819
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] (Unnamed Layer* 1627) [Shuffle]
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Slice_863
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] Reshape_865
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] ArgMax_866
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] PWN(Clip_840)
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] {ForeignNode[1576_146...Reshape_881 + Unsqueeze_882]}
[09/26/2022-09:52:14] [TRT] [I] [GpuLayer] TRTBatchedNMS_883
[09/26/2022-09:52:15] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +225, now: CPU 1099, GPU 6885 (MiB)
[09/26/2022-09:52:15] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[09/26/2022-09:52:15] [TRT] [E] 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)
Process Process-3:
Traceback (most recent call last):
  File "/home/jetson/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/home/jetson/archiconda3/envs/mmdeploy/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/jetson/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/jetson/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 88, in onnx2tensorrt
    device_id=device_id)
  File "/home/jetson/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 216, in from_onnx
    assert engine is not None, 'Failed to create TensorRT engine'
AssertionError: Failed to create TensorRT engine
2022-09-26 09:52:16,625 - mmdeploy - ERROR - `mmdeploy.backend.tensorrt.onnx2tensorrt.onnx2tensorrt` with Call id: 1 failed. exit.
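
(For context when reading the log above: the engine build appears to fail inside TensorRT itself, at the [TRT] [E] "Unknown embedded device detected" line, and the Python AssertionError from from_onnx is only the surface symptom. The directory layout assumed by the deploy command at the top of this issue, reconstructed from the log paths, is sketched below; the exact locations are an assumption for illustration, not something stated explicitly in the issue.)

cd /home/jetson/mmdeploy                        # deploy.py is run from the mmdeploy checkout
export PATH_TO_MMDET=/home/jetson/mmdetection   # mmdetection checkout referenced by the model config
# after the export step the ONNX model lands in ./work_dir/end2end.onnx, and the
# TensorRT plugin library is at ./mmdeploy/lib/libmmdeploy_tensorrt_ops.so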

And I have run the check_env script:

2022-09-26 10:05:32,555 - mmdeploy - INFO -

2022-09-26 10:05:32,556 - mmdeploy - INFO - **********Environmental information**********
fatal: not a git repository (or any of the parent directories): .git
2022-09-26 10:05:37,508 - mmdeploy - INFO - sys.platform: linux
2022-09-26 10:05:37,509 - mmdeploy - INFO - Python: 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) [GCC 9.4.0]
2022-09-26 10:05:37,509 - mmdeploy - INFO - CUDA available: True
2022-09-26 10:05:37,510 - mmdeploy - INFO - GPU 0: Xavier
2022-09-26 10:05:37,510 - mmdeploy - INFO - CUDA_HOME: /usr/local/cuda-10.2
2022-09-26 10:05:37,510 - mmdeploy - INFO - NVCC: Build cuda_10.2_r440.TC440_70.29663091_0
2022-09-26 10:05:37,510 - mmdeploy - INFO - GCC: gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0
2022-09-26 10:05:37,510 - mmdeploy - INFO - PyTorch: 1.10.0
2022-09-26 10:05:37,510 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 7.5
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - CUDA Runtime 10.2
  - NVCC architecture flags: -gencode;arch=compute_53,code=sm_53;-gencode;arch=compute_62,code=sm_62;-gencode;arch=compute_72,code=sm_72
  - CuDNN 8.2.1
    - Built with CuDNN 8.0
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=8.0.0, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -DMISSING_ARM_VST1 -DMISSING_ARM_VLD1 -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=open, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=ON, USE_NCCL=0, USE_NNPACK=ON, USE_OPENMP=ON,

2022-09-26 10:05:37,511 - mmdeploy - INFO - TorchVision: 0.11.1
2022-09-26 10:05:37,511 - mmdeploy - INFO - OpenCV: 4.6.0
2022-09-26 10:05:37,511 - mmdeploy - INFO - MMCV: 1.4.0
2022-09-26 10:05:37,511 - mmdeploy - INFO - MMCV Compiler: GCC 7.5
2022-09-26 10:05:37,511 - mmdeploy - INFO - MMCV CUDA Compiler: 10.2
2022-09-26 10:05:37,511 - mmdeploy - INFO - MMDeploy: 0.8.0+
2022-09-26 10:05:37,512 - mmdeploy - INFO -

2022-09-26 10:05:37,512 - mmdeploy - INFO - **********Backend information**********
2022-09-26 10:05:39,514 - mmdeploy - INFO - onnxruntime: None   ops_is_avaliable : False
2022-09-26 10:05:39,604 - mmdeploy - INFO - tensorrt: 8.2.1.8   ops_is_avaliable : True
2022-09-26 10:05:39,660 - mmdeploy - INFO - ncnn: None  ops_is_avaliable : False
2022-09-26 10:05:39,665 - mmdeploy - INFO - pplnn_is_avaliable: False
2022-09-26 10:05:39,669 - mmdeploy - INFO - openvino_is_avaliable: False
2022-09-26 10:05:39,722 - mmdeploy - INFO - snpe_is_available: False
2022-09-26 10:05:39,729 - mmdeploy - INFO - ascend_is_available: False
2022-09-26 10:05:39,733 - mmdeploy - INFO - coreml_is_available: False
2022-09-26 10:05:39,733 - mmdeploy - INFO -

2022-09-26 10:05:39,734 - mmdeploy - INFO - **********Codebase information**********
2022-09-26 10:05:39,741 - mmdeploy - INFO - mmdet:      2.25.2
2022-09-26 10:05:39,742 - mmdeploy - INFO - mmseg:      None
2022-09-26 10:05:39,742 - mmdeploy - INFO - mmcls:      None
2022-09-26 10:05:39,742 - mmdeploy - INFO - mmocr:      None
2022-09-26 10:05:39,742 - mmdeploy - INFO - mmedit:     None
2022-09-26 10:05:39,743 - mmdeploy - INFO - mmdet3d:    None
2022-09-26 10:05:39,743 - mmdeploy - INFO - mmpose:     None
2022-09-26 10:05:39,743 - mmdeploy - INFO - mmrotate:   None

After that I ran the command from issue #1059, but it didn't work:

/usr/src/tensorrt/bin/trtexec --onnx=./end2end.onnx \
    --plugins=../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so \
    --workspace=6000 --fp16 --saveEngine=end2end.engine

Its output:

&&&& RUNNING TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=./end2end.onnx --plugins=../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so --workspace=6000 --fp16 --saveEngine=end2end.engine
[09/26/2022-10:08:31] [I] === Model Options ===
[09/26/2022-10:08:31] [I] Format: ONNX
[09/26/2022-10:08:31] [I] Model: ./end2end.onnx
[09/26/2022-10:08:31] [I] Output:
[09/26/2022-10:08:31] [I] === Build Options ===
[09/26/2022-10:08:31] [I] Max batch: explicit batch
[09/26/2022-10:08:31] [I] Workspace: 6000 MiB
[09/26/2022-10:08:31] [I] minTiming: 1
[09/26/2022-10:08:31] [I] avgTiming: 8
[09/26/2022-10:08:31] [I] Precision: FP32+FP16
[09/26/2022-10:08:31] [I] Calibration:
[09/26/2022-10:08:31] [I] Refit: Disabled
[09/26/2022-10:08:31] [I] Sparsity: Disabled
[09/26/2022-10:08:31] [I] Safe mode: Disabled
[09/26/2022-10:08:31] [I] DirectIO mode: Disabled
[09/26/2022-10:08:31] [I] Restricted mode: Disabled
[09/26/2022-10:08:31] [I] Save engine: end2end.engine
[09/26/2022-10:08:31] [I] Load engine:
[09/26/2022-10:08:31] [I] Profiling verbosity: 0
[09/26/2022-10:08:31] [I] Tactic sources: Using default tactic sources
[09/26/2022-10:08:31] [I] timingCacheMode: local
[09/26/2022-10:08:31] [I] timingCacheFile:
[09/26/2022-10:08:31] [I] Input(s)s format: fp32:CHW
[09/26/2022-10:08:31] [I] Output(s)s format: fp32:CHW
[09/26/2022-10:08:31] [I] Input build shapes: model
[09/26/2022-10:08:31] [I] Input calibration shapes: model
[09/26/2022-10:08:31] [I] === System Options ===
[09/26/2022-10:08:31] [I] Device: 0
[09/26/2022-10:08:31] [I] DLACore:
[09/26/2022-10:08:31] [I] Plugins: ../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[09/26/2022-10:08:31] [I] === Inference Options ===
[09/26/2022-10:08:31] [I] Batch: Explicit
[09/26/2022-10:08:31] [I] Input inference shapes: model
[09/26/2022-10:08:31] [I] Iterations: 10
[09/26/2022-10:08:31] [I] Duration: 3s (+ 200ms warm up)
[09/26/2022-10:08:31] [I] Sleep time: 0ms
[09/26/2022-10:08:31] [I] Idle time: 0ms
[09/26/2022-10:08:31] [I] Streams: 1
[09/26/2022-10:08:31] [I] ExposeDMA: Disabled
[09/26/2022-10:08:31] [I] Data transfers: Enabled
[09/26/2022-10:08:31] [I] Spin-wait: Disabled
[09/26/2022-10:08:31] [I] Multithreading: Disabled
[09/26/2022-10:08:31] [I] CUDA Graph: Disabled
[09/26/2022-10:08:31] [I] Separate profiling: Disabled
[09/26/2022-10:08:31] [I] Time Deserialize: Disabled
[09/26/2022-10:08:31] [I] Time Refit: Disabled
[09/26/2022-10:08:31] [I] Skip inference: Disabled
[09/26/2022-10:08:31] [I] Inputs:
[09/26/2022-10:08:31] [I] === Reporting Options ===
[09/26/2022-10:08:31] [I] Verbose: Disabled
[09/26/2022-10:08:31] [I] Averages: 10 inferences
[09/26/2022-10:08:31] [I] Percentile: 99
[09/26/2022-10:08:31] [I] Dump refittable layers:Disabled
[09/26/2022-10:08:31] [I] Dump output: Disabled
[09/26/2022-10:08:31] [I] Profile: Disabled
[09/26/2022-10:08:31] [I] Export timing to JSON file:
[09/26/2022-10:08:31] [I] Export output to JSON file:
[09/26/2022-10:08:31] [I] Export profile to JSON file:
[09/26/2022-10:08:31] [I]
[09/26/2022-10:08:31] [I] === Device Information ===
[09/26/2022-10:08:31] [I] Selected Device: Xavier
[09/26/2022-10:08:31] [I] Compute Capability: 7.2
[09/26/2022-10:08:31] [I] SMs: 6
[09/26/2022-10:08:31] [I] Compute Clock Rate: 1.109 GHz
[09/26/2022-10:08:31] [I] Device Global Memory: 15817 MiB
[09/26/2022-10:08:31] [I] Shared Memory per SM: 96 KiB
[09/26/2022-10:08:31] [I] Memory Bus Width: 256 bits (ECC disabled)
[09/26/2022-10:08:31] [I] Memory Clock Rate: 1.109 GHz
[09/26/2022-10:08:31] [I]
[09/26/2022-10:08:31] [I] TensorRT version: 8.2.1
[09/26/2022-10:08:31] [I] Loading supplied plugin library: ../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[09/26/2022-10:08:31] [E] Could not load plugin library: ../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so, due to: ../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so: cannot open shared object file: No such file or directory
[09/26/2022-10:08:33] [I] [TRT] [MemUsageChange] Init CUDA: CPU +362, GPU +0, now: CPU 381, GPU 5787 (MiB)
[09/26/2022-10:08:33] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 381 MiB, GPU 5787 MiB
[09/26/2022-10:08:33] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 486 MiB, GPU 5893 MiB
[09/26/2022-10:08:33] [I] Start parsing network model
Could not open file ./end2end.onnx
Could not open file ./end2end.onnx
[09/26/2022-10:08:33] [E] [TRT] ModelImporter.cpp:735: Failed to parse ONNX model from file: ./end2end.onnx
[09/26/2022-10:08:33] [E] Failed to parse onnx file
[09/26/2022-10:08:33] [I] Finish parsing network model
[09/26/2022-10:08:33] [E] Parsing model failed
[09/26/2022-10:08:33] [E] Failed to create engine from model.
[09/26/2022-10:08:33] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=./end2end.onnx --plugins=../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so --workspace=6000 --fp16 --saveEngine=end2end.engine
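
(Note that both errors in this trtexec log are path problems rather than TensorRT failures: the plugin library was not found at the relative path ../../mmdeploy/lib/libmmdeploy_tensorrt_ops.so, and ./end2end.onnx does not exist in the current directory. A re-run with absolute paths taken from the deploy log earlier in this issue would look like the sketch below; the input tensor name and the min/opt/max shapes are assumptions based on the dynamic 320x320-1344x1344 deploy config, so adjust them if they do not match the exported model.)

/usr/src/tensorrt/bin/trtexec \
    --onnx=/home/jetson/mmdeploy/work_dir/end2end.onnx \
    --plugins=/home/jetson/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so \
    --workspace=6000 --fp16 \
    --minShapes=input:1x3x320x320 \
    --optShapes=input:1x3x800x1344 \
    --maxShapes=input:1x3x1344x1344 \
    --saveEngine=/home/jetson/mmdeploy/work_dir/end2end.engine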

I'm looking forward to your answer. Thank you!!!

AllentDan commented 2 years ago

@lakshanthad Hi, I saw your pull request #484 to MMDeploy, which added the Xavier NX to the supported device list. Could you please help if possible?

AllentDan commented 2 years ago

Hi, I saw your issue https://github.com/open-mmlab/mmdeploy/issues/1063. For Jetpack 5.0.2, since there is no way to downgrade PyTorch, maybe you could try using an MMCV version newer than 1.4.0. We have not tested the latest MMCV on Jetson devices, though.
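
If you want to try that route, a minimal sketch of building a newer mmcv-full from source on the Jetson could look like the following; the version tag is an assumption and should be matched to your mmdet/mmdeploy versions.

git clone https://github.com/open-mmlab/mmcv.git
cd mmcv
git checkout v1.6.2                 # assumed tag; pick a release >1.4.0 that your stack supports
MMCV_WITH_OPS=1 pip install -e .    # builds the CUDA ops on-device; this takes a while on Jetson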

lijoe123 commented 2 years ago

Hi, I saw your issue #1063. For Jetpack 5.0.2, since there is no way to downgrade PyTorch, maybe you could try using an MMCV version newer than 1.4.0. We have not tested the latest MMCV on Jetson devices, though.

OK, thank you for your answer! But it has not solved the problem above.

AllentDan commented 2 years ago

Yeah, looking forward to the reply from @lakshanthad

lijoe123 commented 2 years ago

Yeah, looking forward to the reply from @lakshanthad

/(ㄒoㄒ)/~~ Waiting for this guy!

AllentDan commented 2 years ago

Hi @lijoe123. Sorry that @lakshanthad did not respond to the issue. Listing the Xavier NX as a supported device in #484 might have been a mistake, since it was never actually verified. We will remove the device from the doc.