openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

[Performance]: INT8 hifiGAN quantized by NNCF runs much slower than BF16 with OpenVINO on CPU #25197

Open · SakuraYM opened this issue 3 months ago

SakuraYM commented 3 months ago

OpenVINO Version

2024.0.0

Operating System

Ubuntu 22.04 (LTS)

Device used for inference

CPU

OpenVINO installation

PyPI

Programming Language

Python

Hardware Architecture

x86 (64 bits)

Model used

hifiGAN vocoder

Model quantization

Yes

Target Platform

Architecture:            x86_64
CPU op-mode(s):          32-bit, 64-bit
Address sizes:           52 bits physical, 57 bits virtual
Byte Order:              Little Endian
CPU(s):                  256
On-line CPU(s) list:     0-255
Vendor ID:               GenuineIntel
Model name:              INTEL(R) XEON(R) PLATINUM 8592+
CPU family:              6
Model:                   207
Thread(s) per core:      2
Core(s) per socket:      64
Socket(s):               2
Stepping:                2
CPU max MHz:             3900.0000
CPU min MHz:             800.0000
BogoMIPS:                3800.00
Flags:                   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization:          VT-x
Caches (sum of all):
  L1d:                   6 MiB (128 instances)
  L1i:                   4 MiB (128 instances)
  L2:                    256 MiB (128 instances)
  L3:                    640 MiB (2 instances)
NUMA:
  NUMA node(s):          2
  NUMA node0 CPU(s):     0-63,128-191
  NUMA node1 CPU(s):     64-127,192-255
Vulnerabilities:
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
  Srbds:                 Not affected
  Tsx async abort:       Not affected

Performance issue description

benchmark_app shows that inference of the INT8 quantized hifiGAN model is much slower than BF16 (AMX_BF16 and AMX_INT8 results were attached as screenshots).

The same slowdown also occurs during the model compilation phase.
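For context, a rough reconstruction of the two runs being compared is sketched below. The IR paths, the -pc and -infer_precision flags, and the use of subprocess are assumptions, since the exact command lines are not included in the issue.

```python
# Sketch of the two benchmark_app runs (hypothetical model paths and flags).
import subprocess

runs = {
    "hifigan_bf16.log": ["benchmark_app", "-m", "hifigan.xml", "-d", "CPU",
                         "-t", "60", "-infer_precision", "bf16", "-pc"],
    "hifigan_i8.log":   ["benchmark_app", "-m", "hifigan_int8.xml", "-d", "CPU",
                         "-t", "60", "-pc"],
}
for log_name, cmd in runs.items():
    with open(log_name, "w") as log:
        # -pc asks benchmark_app to report per-layer performance counters
        subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, check=True)
```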

Step-by-step reproduction

This is the NNCF quantization code for hifiGAN:

nncf_hifigan.txt
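The attached script is not shown inline. Purely for reference, here is a minimal sketch of post-training INT8 quantization of an OpenVINO IR with NNCF using random (dummy) calibration data, which matches what is described later in this thread. The IR paths, the input shape, and the number of calibration samples are placeholders rather than values from the attached file, and a single-input model is assumed.

```python
import numpy as np
import openvino as ov
import nncf

core = ov.Core()
model = core.read_model("hifigan.xml")  # placeholder path to the original IR

# Dummy calibration data: random mel-spectrogram-like tensors.
# The shape (1, 80, 256) is a placeholder, not taken from the attached script.
calibration_items = [np.random.randn(1, 80, 256).astype(np.float32) for _ in range(300)]
calibration_dataset = nncf.Dataset(calibration_items)

quantized_model = nncf.quantize(
    model,
    calibration_dataset,
    preset=nncf.QuantizationPreset.MIXED,  # the preset mentioned later in the thread
)
ov.save_model(quantized_model, "hifigan_int8.xml")
```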


rkazants commented 3 months ago

@MaximProshin, @AlexKoff88, please take a look at this issue.

MaximProshin commented 3 months ago

This is a performance issue. As far as I understand, the model was quantized with default settings and the Mixed preset. The CPU plugin team should investigate it. @wenjiew @dmitry-gorokhov, can someone from your side check why the INT8 performance is so poor?

MaximProshin commented 3 months ago

@SakuraYM , did you validate the model after the quantization? Is it accurate?

SakuraYM commented 3 months ago

> did you validate the model after the quantization? Is it accurate?

No, we quantized hifiGAN with dummy inputs; we just want to estimate the best possible performance improvement.

dmitry-gorokhov commented 3 months ago

@SakuraYM May I ask you to attach both original and quantized IRs to this issue?

AlexKoff88 commented 3 months ago

I also noticed that you used 259 and 11 benchmarking iterations for the BF16 and INT8 models, respectively. I think it is also worth looking at how this model is quantized from the NNCF perspective.

SakuraYM commented 3 months ago

> @SakuraYM May I ask you to attach both original and quantized IRs to this issue?

Of course, after data masking I'll upload the model. :)

SakuraYM commented 3 months ago

> I also noticed that you used 259 and 11 benchmarking iterations for the BF16 and INT8 models, respectively. I think it is also worth looking at how this model is quantized from the NNCF perspective.

Yes, because benchmark_app was run with its default configuration, which only collects data for 1 minute.
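To make the two measurements more directly comparable, the iteration count can be pinned instead of relying on the time-based default; a minimal sketch (the IR paths and the count of 200 are arbitrary placeholders):

```python
# Run both models for the same fixed number of iterations (placeholder value).
import subprocess

for xml in ("hifigan.xml", "hifigan_int8.xml"):  # placeholder IR paths
    subprocess.run(["benchmark_app", "-m", xml, "-d", "CPU", "-niter", "200"], check=True)
```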

SakuraYM commented 3 months ago

hifigan_bf16.log hifigan_i8.log The attached benchmark_app logs provide details for the analysis.

dmitry-gorokhov commented 3 months ago

> hifigan_bf16.log hifigan_i8.log The attached benchmark_app logs provide details for the analysis.

Based on these logs, I see that the INT8 Convolutions are dramatically slow for some reason. @SakuraYM Could you please repeat the same benchmark_app runs with the DNNL_VERBOSE=1 environment variable enabled and share the logs?
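For reference, one way to capture such logs, assuming the pip-installed benchmark_app entry point and placeholder model/log names:

```python
import os
import subprocess

env = dict(os.environ, DNNL_VERBOSE="1")  # makes oneDNN print a line per executed primitive
for xml, log_name in (("hifigan.xml", "hifigan_bf16_dnn.log"),
                      ("hifigan_int8.xml", "hifigan_int8_dnn.log")):
    with open(log_name, "w") as log:
        subprocess.run(["benchmark_app", "-m", xml, "-d", "CPU", "-t", "60"],
                       env=env, stdout=log, stderr=subprocess.STDOUT, check=True)
```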

SakuraYM commented 3 months ago

> hifigan_bf16.log hifigan_i8.log The attached benchmark_app logs provide details for the analysis.
>
> Based on these logs, I see that the INT8 Convolutions are dramatically slow for some reason. @SakuraYM Could you please repeat the same benchmark_app runs with the DNNL_VERBOSE=1 environment variable enabled and share the logs?

hifigan_bf16_dnn.log hifigan_int8_dnn.log

SakuraYM commented 3 months ago

> @SakuraYM May I ask you to attach both original and quantized IRs to this issue?
>
> Of course, after data masking I'll upload the model. :)

@dmitry-gorokhov Hi, the models are too big to upload here. Is there a better way to share them? Alternatively, you can contact Yu, Meng on Teams and I will send them to you directly.