Deci-AI / super-gradients

Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
https://www.supergradients.com
Apache License 2.0

Why do PTQ and QAT automatically output different ONNX shapes? #1867

Closed BlueRayi closed 5 months ago

BlueRayi commented 7 months ago

💡 Your Question

Regarding Quantization-Aware Training for YOLO-NAS: I followed the procedure here, but the input and output formats seem to differ between the ONNX exported at the PTQ stage and the ONNX exported after QAT.

The shape of the PTQ ONNX matches the inputs and outputs produced by the following code, i.e. the “Batch Format” referred to in this document, but the ONNX exported after QAT has a completely different output shape.

from super_gradients.common.object_names import Models
from super_gradients.training import models

model = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")

export_result = model.export("yolo_nas_s.onnx")
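
For context, consuming the four “Batch Format” outputs looks roughly like the sketch below. This is a minimal example with onnxruntime, not an official SuperGradients helper; the file names are placeholders, and the uint8 [1, 3, 640, 640] input reflects my understanding that the default export bakes preprocessing into the graph.

import numpy as np
import onnxruntime as ort
from PIL import Image

# Assumes the default export: input is a uint8 [1, 3, 640, 640] tensor,
# preprocessing is embedded in the graph. File names are placeholders.
session = ort.InferenceSession("yolo_nas_s.onnx")
input_name = session.get_inputs()[0].name

image = Image.open("example.jpg").convert("RGB").resize((640, 640))
blob = np.asarray(image, dtype=np.uint8).transpose(2, 0, 1)[np.newaxis]

# The four "Batch Format" outputs: num_detections, boxes, scores, labels.
num_detections, boxes, scores, labels = session.run(None, {input_name: blob})

n = int(num_detections[0, 0])
for box, score, label in zip(boxes[0, :n], scores[0, :n], labels[0, :n]):
    print(int(label), float(score), box)  # box is [x1, y1, x2, y2]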

The following images show the properties of each ONNX as confirmed in NETRON: the first is the PTQ model, the second the QAT model.

PTQ ONNX properties, 4 outputs: int64[1, 1], float32[1, N, M], float32[1, N], int64[1, N]

QAT ONNX properties, 2 outputs: float32[16, 8400, 4], float32[16, 8400, 3]

I would appreciate an explanation of why the two ONNX formats differ, and of how to use the ONNX exported after QAT to run inference on an image.

Thank you.

Versions

PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31

Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 545.29.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
Stepping: 1
CPU MHz: 3000.000
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Virtualization: VT-x
L1d cache: 576 KiB
L1i cache: 384 KiB
L2 cache: 24 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] onnx==1.13.0
[pip3] onnx-graphsurgeon==0.3.27
[pip3] onnx-simplifier==0.4.35
[pip3] onnxruntime==1.13.1
[pip3] onnxsim==0.4.35
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.2.0
[pip3] torchmetrics==0.8.0
[pip3] torchvision==0.17.0
[conda] Could not collect

BloodAxe commented 7 months ago

This is because PTQ can be done solely through the model.export call: model.export(..., quantization_mode=INT8, calibration_loader=...). During the export operation you can therefore attach postprocessing (NMS) to the model, so that it outputs decoded boxes.
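
Spelled out, that PTQ path looks roughly like the sketch below. This is a sketch based on the export documentation: the import path for ExportQuantizationMode may differ between versions, and calibration_loader is a placeholder DataLoader you must construct yourself.

from super_gradients.common.object_names import Models
from super_gradients.training import models
from super_gradients.conversion import ExportQuantizationMode

model = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")

# calibration_loader: a torch DataLoader yielding representative images
# for INT8 calibration (placeholder; supply your own).
export_result = model.export(
    "yolo_nas_s_int8.onnx",
    quantization_mode=ExportQuantizationMode.INT8,
    calibration_loader=calibration_loader,
)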

As for QAT, we use the Trainer and have not fully integrated our new export() API there. So we have limited options to control the export of the QAT-ed model, which is exported without postprocessing.

So currently there is no option to export a QAT model with postprocessing. This would be a good improvement, though.
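
Until that improvement lands, one workaround is to decode the flat QAT outputs yourself. A sketch, assuming (from the screenshots above) that the first output holds xyxy boxes and the second holds per-class scores, and that preprocessing mirrors the training recipe; the file name and input tensor here are placeholders.

import numpy as np
import onnxruntime as ort

def nms(boxes, scores, iou_threshold=0.65):
    # Plain numpy NMS over xyxy boxes; returns indices of kept boxes.
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]
    return keep

session = ort.InferenceSession("yolo_nas_s_qat.onnx")  # placeholder file name
input_name = session.get_inputs()[0].name

# Placeholder input; real code must preprocess images exactly as in training.
blob = np.zeros((16, 3, 640, 640), dtype=np.float32)
boxes, scores = session.run(None, {input_name: blob})  # [16,8400,4], [16,8400,C]

for b in range(boxes.shape[0]):          # iterate over the batch
    confs = scores[b].max(axis=1)        # best score per anchor
    cls_ids = scores[b].argmax(axis=1)   # best class per anchor
    mask = confs > 0.25                  # confidence threshold
    for i in nms(boxes[b][mask], confs[mask]):
        print(cls_ids[mask][i], confs[mask][i], boxes[b][mask][i])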

BlueRayi commented 7 months ago

Thank you for your response.

So are you saying that, at this time, QAT is only useful for benchmarking? Or is there a way, other than train_from_recipe, for a QAT model to perform object detection on an image or video so the results can be checked?

shaydeci commented 5 months ago

@BlueRayi I am closing this issue, since as of https://github.com/Deci-AI/super-gradients/pull/1879 both QAT and PTQ use the model's dedicated .export() method, and the shapes should be the same.