NVIDIA-AI-IOT / cuDLA-samples

YOLOv5 on Orin DLA

/model.0/conv/_input_quantizer/Constant_1_output_0' is not supported on DLA. #19

Closed: WangFengtu1996 closed this issue 1 month ago

WangFengtu1996 commented 8 months ago
  1. Following the export README.md (Option 1, QAT -> PTQ), I tried to serialize the ONNX model into an engine file:
(py310) orin@orin-root:~/workspace/cuDLA-samples$ bash data/model/build_dla_standalone_loadable_v2_dla1.sh
Build DLA loadable for fp16 and int8
&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --minShapes=images:1x3x640x640 --maxShapes=images:1x3x640x640 --optShapes=images:1x3x640x640 --shapes=images:1x3x640x640 --onnx=data/model/yolov5_trimmed_qat_1_25.onnx --buildDLAStandalone --useDLACore=1 --saveEngine=data/loadable/yolov5.int8.int8hwc4in.fp16chw16out.standalone.dla1.bin --inputIOFormats=int8:dla_hwc4 --outputIOFormats=fp16:chw16 --int8 --fp16 --calib=data/model/yolov5_trimmed_qat_1_25_precision_config_calib.cache --precisionConstraints=prefer --layerPrecisions=/model.24/m.0/Conv:fp16,/model.24/m.1/Conv:fp16,/model.24/m.2/Conv:fp16,/model.23/cv3/conv/Conv:fp16,/model.23/cv3/act/Sigmoid:fp16,/model.23/cv3/act/Mul:fp16
[01/26/2024-13:45:17] [I] === Model Options ===
[01/26/2024-13:45:17] [I] Format: ONNX
[01/26/2024-13:45:17] [I] Model: data/model/yolov5_trimmed_qat_1_25.onnx
[01/26/2024-13:45:17] [I] Output:
[01/26/2024-13:45:17] [I] === Build Options ===
[01/26/2024-13:45:17] [I] Max batch: explicit batch
[01/26/2024-13:45:17] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[01/26/2024-13:45:17] [I] minTiming: 1
[01/26/2024-13:45:17] [I] avgTiming: 8
[01/26/2024-13:45:17] [I] Precision: FP32+FP16+INT8 (prefer precision constraints)
[01/26/2024-13:45:17] [I] LayerPrecisions: /model.23/cv3/act/Mul:fp16,/model.23/cv3/conv/Conv:fp16,/model.24/m.2/Conv:fp16,/model.23/cv3/act/Sigmoid:fp16,/model.24/m.1/Conv:fp16,/model.24/m.0/Conv:fp16
[01/26/2024-13:45:17] [I] Calibration: data/model/yolov5_trimmed_qat_1_25_precision_config_calib.cache
[01/26/2024-13:45:17] [I] Refit: Disabled
[01/26/2024-13:45:17] [I] Sparsity: Disabled
[01/26/2024-13:45:17] [I] Safe mode: Disabled
[01/26/2024-13:45:17] [I] DirectIO mode: Disabled
[01/26/2024-13:45:17] [I] Restricted mode: Disabled
[01/26/2024-13:45:17] [I] Build only: Enabled
[01/26/2024-13:45:17] [I] Save engine: data/loadable/yolov5.int8.int8hwc4in.fp16chw16out.standalone.dla1.bin
[01/26/2024-13:45:17] [I] Load engine:
[01/26/2024-13:45:17] [I] Profiling verbosity: 0
[01/26/2024-13:45:17] [I] Tactic sources: Using default tactic sources
[01/26/2024-13:45:17] [I] timingCacheMode: local
[01/26/2024-13:45:17] [I] timingCacheFile:
[01/26/2024-13:45:17] [I] Heuristic: Disabled
[01/26/2024-13:45:17] [I] Preview Features: Use default preview flags.
[01/26/2024-13:45:17] [I] Input(s): int8:+dla_hwc4
[01/26/2024-13:45:17] [I] Output(s): fp16:+chw16
[01/26/2024-13:45:17] [I] Input build shape: images=1x3x640x640+1x3x640x640+1x3x640x640
[01/26/2024-13:45:17] [I] Input calibration shape: images=1x3x640x640+1x3x640x640+1x3x640x640
[01/26/2024-13:45:17] [I] === System Options ===
[01/26/2024-13:45:17] [I] Device: 0
[01/26/2024-13:45:17] [I] DLACore: 1
[01/26/2024-13:45:17] [I] Plugins:
[01/26/2024-13:45:17] [I] === Inference Options ===
[01/26/2024-13:45:17] [I] Batch: Explicit
[01/26/2024-13:45:17] [I] Input inference shape: images=1x3x640x640
[01/26/2024-13:45:17] [I] Iterations: 10
[01/26/2024-13:45:17] [I] Duration: 3s (+ 200ms warm up)
[01/26/2024-13:45:17] [I] Sleep time: 0ms
[01/26/2024-13:45:17] [I] Idle time: 0ms
[01/26/2024-13:45:17] [I] Streams: 1
[01/26/2024-13:45:17] [I] ExposeDMA: Disabled
[01/26/2024-13:45:17] [I] Data transfers: Enabled
[01/26/2024-13:45:17] [I] Spin-wait: Disabled
[01/26/2024-13:45:17] [I] Multithreading: Disabled
[01/26/2024-13:45:17] [I] CUDA Graph: Disabled
[01/26/2024-13:45:17] [I] Separate profiling: Disabled
[01/26/2024-13:45:17] [I] Time Deserialize: Disabled
[01/26/2024-13:45:17] [I] Time Refit: Disabled
[01/26/2024-13:45:17] [I] NVTX verbosity: 0
[01/26/2024-13:45:17] [I] Persistent Cache Ratio: 0
[01/26/2024-13:45:17] [I] Inputs:
[01/26/2024-13:45:17] [I] === Reporting Options ===
[01/26/2024-13:45:17] [I] Verbose: Disabled
[01/26/2024-13:45:17] [I] Averages: 10 inferences
[01/26/2024-13:45:17] [I] Percentiles: 90,95,99
[01/26/2024-13:45:17] [I] Dump refittable layers:Disabled
[01/26/2024-13:45:17] [I] Dump output: Disabled
[01/26/2024-13:45:17] [I] Profile: Disabled
[01/26/2024-13:45:17] [I] Export timing to JSON file:
[01/26/2024-13:45:17] [I] Export output to JSON file:
[01/26/2024-13:45:17] [I] Export profile to JSON file:
[01/26/2024-13:45:17] [I]
[01/26/2024-13:45:17] [I] === Device Information ===
[01/26/2024-13:45:17] [I] Selected Device: Orin
[01/26/2024-13:45:17] [I] Compute Capability: 8.7
[01/26/2024-13:45:17] [I] SMs: 16
[01/26/2024-13:45:17] [I] Compute Clock Rate: 1.3 GHz
[01/26/2024-13:45:17] [I] Device Global Memory: 62800 MiB
[01/26/2024-13:45:17] [I] Shared Memory per SM: 164 KiB
[01/26/2024-13:45:17] [I] Memory Bus Width: 256 bits (ECC disabled)
[01/26/2024-13:45:17] [I] Memory Clock Rate: 1.3 GHz
[01/26/2024-13:45:17] [I]
[01/26/2024-13:45:17] [I] TensorRT version: 8.5.2
[01/26/2024-13:45:17] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 249, GPU 16692 (MiB)
[01/26/2024-13:45:20] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +406, now: CPU 574, GPU 17119 (MiB)
[01/26/2024-13:45:20] [I] Start parsing network model
[01/26/2024-13:45:20] [I] [TRT] ----------------------------------------------------------------
[01/26/2024-13:45:20] [I] [TRT] Input filename:   data/model/yolov5_trimmed_qat_1_25.onnx
[01/26/2024-13:45:20] [I] [TRT] ONNX IR version:  0.0.7
[01/26/2024-13:45:20] [I] [TRT] Opset version:    13
[01/26/2024-13:45:20] [I] [TRT] Producer name:    pytorch
[01/26/2024-13:45:20] [I] [TRT] Producer version: 2.1.2
[01/26/2024-13:45:20] [I] [TRT] Domain:
[01/26/2024-13:45:20] [I] [TRT] Model version:    0
[01/26/2024-13:45:20] [I] [TRT] Doc string:
[01/26/2024-13:45:20] [I] [TRT] ----------------------------------------------------------------
[01/26/2024-13:45:21] [I] Finish parsing network model
[01/26/2024-13:45:21] [E] Error[4]: DLA Standalone is enabled but layer: '/model.0/conv/_input_quantizer/Constant_1_output_0' is not supported on DLA.
[01/26/2024-13:45:21] [E] Error[4]: [network.cpp::validate::2789] Error Code 4: Internal Error (DLA validation failed)
[01/26/2024-13:45:21] [E] Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[01/26/2024-13:45:21] [E] Engine could not be created from network
[01/26/2024-13:45:21] [E] Building engine failed
[01/26/2024-13:45:21] [E] Failed to create engine from model or file.
[01/26/2024-13:45:21] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --minShapes=images:1x3x640x640 --maxShapes=images:1x3x640x640 --optShapes=images:1x3x640x640 --shapes=images:1x3x640x640 --onnx=data/model/yolov5_trimmed_qat_1_25.onnx --buildDLAStandalone --useDLACore=1 --saveEngine=data/loadable/yolov5.int8.int8hwc4in.fp16chw16out.standalone.dla1.bin --inputIOFormats=int8:dla_hwc4 --outputIOFormats=fp16:chw16 --int8 --fp16 --calib=data/model/yolov5_trimmed_qat_1_25_precision_config_calib.cache --precisionConstraints=prefer --layerPrecisions=/model.24/m.0/Conv:fp16,/model.24/m.1/Conv:fp16,/model.24/m.2/Conv:fp16,/model.23/cv3/conv/Conv:fp16,/model.23/cv3/act/Sigmoid:fp16,/model.23/cv3/act/Mul:fp16
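For anyone hitting the same wall: a quick way to see which layers the builder rejects is to parse the ONNX and query layer-level DLA support through the TensorRT Python API. A rough sketch, assuming the TensorRT 8.5 Python bindings and the model path from the log above (the `can_run_on_DLA` answers also depend on precision flags and, for INT8, on dynamic ranges, so treat the output as a hint rather than a verdict):

```python
import tensorrt as trt

MODEL = "data/model/yolov5_trimmed_qat_1_25.onnx"  # path from the failing run

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(MODEL, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

# Mirror the trtexec flags that matter for DLA placement.
config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 1
config.set_flag(trt.BuilderFlag.INT8)
config.set_flag(trt.BuilderFlag.FP16)

# Print every layer the builder would refuse to place on the DLA.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if not config.can_run_on_DLA(layer):
        print(f"NOT on DLA: {layer.name} ({layer.type})")
```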
liuanqi-libra7 commented 8 months ago

Hi

I checked your log. The model still contains quantize/dequantize (Q/DQ) nodes, which is what triggers `Error[4]: DLA Standalone is enabled but layer: '/model.0/conv/_input_quantizer/Constant_1_output_0' is not supported on DLA.` In build_dla_standalone_loadable_v2_dla1.sh, you should point the build at the model variant without Q/DQ nodes.

Thanks!
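Before rebuilding, it is easy to confirm whether an exported ONNX still carries Q/DQ nodes. A minimal sketch, assuming the onnx Python package and the same model path as in the log above:

```python
from collections import Counter

import onnx

model = onnx.load("data/model/yolov5_trimmed_qat_1_25.onnx")
ops = Counter(node.op_type for node in model.graph.node)
qdq = {op: n for op, n in ops.items()
       if op in ("QuantizeLinear", "DequantizeLinear")}
# Any non-empty result means the QDQ variant was exported; rebuild
# from the non-QDQ export instead.
print(qdq or "no Q/DQ nodes found")
```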

WangFengtu1996 commented 7 months ago

Er, I did follow the README steps.

lynettez commented 1 month ago

Closing since there has been no activity for several months. Thanks, all!