intel / intel-extension-for-pytorch

A Python package for extending the official PyTorch to easily obtain performance gains on Intel platforms
Apache License 2.0

ipex.optimize -> linear_prepack throws a "RuntimeError: could not create a primitive descriptor" error #375

Open TheMrCodes opened 1 year ago

TheMrCodes commented 1 year ago

Describe the issue

I tried to run an LLM on my Arc A770 16GB (with an i5-13500T CPU) and stumbled upon this error. From the stack trace and the error message alone, I'm not totally sure whether this is really a library error or something else.

I would appreciate guidance on how to fix or further debug this error.

Error - Script Output:

No CUDA runtime is found, using CUDA_HOME='/usr'
Starting to load the model TheBloke/minotaur-15B-GPTQ into memory
2023-06-20 00:07:54,739 - auto_gptq.modeling._base - INFO - lm_head not been quantized, will be ignored when make_quant.
2023-06-20 00:07:54,740 - auto_gptq.nn_modules.qlinear_old - WARNING - CUDA extension not installed.
2023-06-20 00:07:55,338 - accelerate.utils.modeling - WARNING - The safetensors archive passed at /home/themrcodes/.cache/huggingface/hub/models--TheBloke--minotaur-15B-GPTQ/snapshots/0a9aa2806875b25e3f8da550c3ab4173ad437f18/gptq_model-4bit-128g.safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
2023-06-20 00:08:12,099 - auto_gptq.modeling._base - WARNING - GPTBigCodeGPTQForCausalLM hasn't fused attention module yet, will skip inject fused attention.
2023-06-20 00:08:12,099 - auto_gptq.modeling._base - WARNING - GPTBigCodeGPTQForCausalLM hasn't fused mlp module yet, will skip inject fused mlp.
/home/themrcodes/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/frontend.py:522: UserWarning: Conv BatchNorm folding failed during the optimize process.
  warnings.warn("Conv BatchNorm folding failed during the optimize process.")
/home/themrcodes/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/frontend.py:527: UserWarning: Linear BatchNorm folding failed during the optimize process.
  warnings.warn("Linear BatchNorm folding failed during the optimize process.")
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[1], line 23
     13 model = AutoGPTQForCausalLM.from_quantized(
     14     model_name_or_path,
     15     model_basename=model_basename,
   (...)
     20     offload_folder="offload",
     21 )
     22 model.training = False
---> 23 model = ipex.optimize(model)
     25 tok = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
     26 tok.bos_token_id = 1

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/frontend.py:583, in optimize(model, dtype, optimizer, level, inplace, conv_bn_folding, linear_bn_folding, weights_prepack, replace_dropout_with_identity, optimize_lstm, split_master_weight_for_bf16, fuse_update_step, auto_kernel_selection, sample_input, graph_mode)
    579             if dtype == torch.half:
    580                 assert core.onednn_has_fp16_support(), \
    581                     "FP16 weight prepack needs the cpu support avx512_core_fp16, " + \
    582                     "please set dtype to torch.float or set weights_prepack to False."
--> 583             optimized_model, optimized_optimizer, params_attr = utils._weight_prepack.weight_prepack_with_ipex(
    584                 optimized_model, optimized_optimizer, params_attr, 'cpu')
    586 if opt_properties.graph_mode:
    587     _old_forward = optimized_model.forward

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/utils/_weight_prepack.py:402, in weight_prepack_with_ipex(module, optimizer, params_attr, device_type)
    399     return new_m, optimizer, params_attr
    401 if device_type == 'cpu':
--> 402     opt_model, opt_optmizer, params_attr = convert_rec(module, optimizer, params_attr)
    403     if opt_optmizer is not None:
    404         setattr(opt_optmizer, 'params_attr', params_attr) # noqa B010

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/utils/_weight_prepack.py:398, in weight_prepack_with_ipex.<locals>.convert_rec(m, optimizer, params_attr)
    396 new_m = convert(m, optimizer, params_attr)
    397 for name, sub_m in m.named_children():
--> 398     setattr(new_m, name, convert_rec(sub_m, optimizer, params_attr)[0])
    399 return new_m, optimizer, params_attr

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/utils/_weight_prepack.py:398, in weight_prepack_with_ipex.<locals>.convert_rec(m, optimizer, params_attr)
    396 new_m = convert(m, optimizer, params_attr)
    397 for name, sub_m in m.named_children():
--> 398     setattr(new_m, name, convert_rec(sub_m, optimizer, params_attr)[0])
    399 return new_m, optimizer, params_attr

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/utils/_weight_prepack.py:396, in weight_prepack_with_ipex.<locals>.convert_rec(m, optimizer, params_attr)
    395 def convert_rec(m, optimizer, params_attr):
--> 396     new_m = convert(m, optimizer, params_attr)
    397     for name, sub_m in m.named_children():
    398         setattr(new_m, name, convert_rec(sub_m, optimizer, params_attr)[0])

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/utils/_weight_prepack.py:345, in weight_prepack_with_ipex.<locals>.convert(m, optimizer, params_attr)
    343 if type(m) is torch.nn.Linear:
    344     if m.weight.dtype == torch.half:
--> 345         new_m = IPEX_WEIGHT_PREPACK_MODULE_CPU[type(m)](m, use_dnnl=True)
    346     elif m.weight.dtype == torch.float32 and optimizer is None \
    347         and frontend.get_fp32_math_mode(device="cpu") == frontend.FP32MathMode.FP32 \
    348             and not _using_dnnl():
    349         new_m = IPEX_WEIGHT_PREPACK_MODULE_CPU[type(m)](m, use_dnnl=False)

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/intel_extension_for_pytorch/nn/utils/_weight_prepack.py:171, in _IPEXLinear.__init__(self, dense_module, use_dnnl)
    169 # create linear op context
    170 if self.use_dnnl:
--> 171     self.ctx = torch.ops.ipex_prepack.linear_prepack(dense_module.weight,
    172                                                      self.bias, self.batch_size_collapsed)
    173 else:
    174     self.ctx = torch.ops.ipex_prepack.mkl_sgemm_prepack(dense_module.weight,
    175                                                         self.bias, self.batch_size_collapsed)

File ~/repo/python/intel-arc/.venv/lib/python3.10/site-packages/torch/_ops.py:442, in OpOverloadPacket.__call__(self, *args, **kwargs)
    437 def __call__(self, *args, **kwargs):
    438     # overloading __call__ to ensure torch.ops.foo.bar()
    439     # is still callable from JIT
    440     # We save the function ptr as the `op` attribute on
    441     # OpOverloadPacket to access it here.
--> 442     return self._op(*args, **kwargs or {})

RuntimeError: could not create a primitive descriptor

Executed code and its dependencies:

Dependencies:

pip install -q -U torch
pip install -q -U git+https://github.com/huggingface/transformers.git
pip install -q -U git+https://github.com/huggingface/peft.git
pip install -q -U git+https://github.com/huggingface/accelerate.git
pip install -q -U gradio
pip install -q -U sentencepiece
pip install -q -U auto-gptq

Code:

import intel_extension_for_pytorch as ipex

from transformers import AutoTokenizer, pipeline, logging, LlamaTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_name_or_path = "TheBloke/minotaur-15B-GPTQ"
model_basename = "gptq_model-4bit-128g"
use_triton = False

print(f"Starting to load the model {model_name_or_path} into memory")

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    trust_remote_code=False,
    use_triton=use_triton,
    quantize_config=None,
    offload_folder="offload",
)
model.training = False
model = ipex.optimize(model)

tok = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
tok.bos_token_id = 1
stop_token_ids = [0]

print(f"Successfully loaded the model {model} into memory")
jingxu10 commented 1 year ago

Please share your environment info: https://github.com/intel/intel-extension-for-pytorch/blob/master/scripts/collect_env.py
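
For example (fetching the script into the working directory; the raw URL is derived from the repo path above):

wget https://raw.githubusercontent.com/intel/intel-extension-for-pytorch/master/scripts/collect_env.py
python collect_env.py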

TheMrCodes commented 1 year ago

Here are the results:

(.venv) themrcodes@worker1:~/repo/python/intel-arc$ python collect_env.py
/home/themrcodes/repo/python/intel-arc/.venv/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
  warn(f"Failed to load image Python extension: {e}")
No CUDA runtime is found, using CUDA_HOME='/usr'
Collecting environment information...
PyTorch version: 1.13.0a0+git6c9b55e
PyTorch CXX11 ABI: Yes
IPEX version: 1.13.120+xpu
IPEX commit: c2a37012e
Build type: Release

OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
Clang version: N/A
IGC version: 2023.1.0 (2023.1.0.20230320)
CMake version: version 3.26.4
Libc version: glibc-2.35

Python version: 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-5.19.0-43-generic-x86_64-with-glibc2.35
Is XPU available: True
DPCPP runtime version: 2023.1.0
MKL version: 2023.1.0
GPU models and configuration:
[0] _DeviceProperties(name='Intel(R) Arc(TM) A770 Graphics', platform_name='Intel(R) Level-Zero', dev_type='gpu, support_fp64=0, total_memory=15473MB, max_compute_units=512)
Intel OpenCL ICD version: 23.13.26032.26-627~22.04
Level Zero version: 1.3.26032.26-627~22.04

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          20
On-line CPU(s) list:             0-19
Vendor ID:                       GenuineIntel
Model name:                      13th Gen Intel(R) Core(TM) i5-13500T
CPU family:                      6
Model:                           191
Thread(s) per core:              2
Core(s) per socket:              14
Socket(s):                       1
Stepping:                        2
CPU max MHz:                     4600.0000
CPU min MHz:                     800.0000
BogoMIPS:                        3225.60
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       544 KiB (14 instances)
L1i cache:                       704 KiB (14 instances)
L2 cache:                        11.5 MiB (8 instances)
L3 cache:                        24 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-19
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==1.13.120+xpu
[pip3] numpy==1.25.0
[pip3] torch==1.13.0a0+git6c9b55e
[pip3] torchvision==0.14.1a0+5e8e2f1
[conda] N/A

jgong5 commented 1 year ago

From your code, this is a 4-bit quantized model (model_basename = "gptq_model-4bit-128g")? I guess that's the cause, since 4-bit needs special kernels that are not available in IPEX yet. Have you tried an fp32 or bf16 model?
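
A minimal sketch of what the bf16 path could look like (the unquantized checkpoint name is a guess at the source repo of this GPTQ model, and a 15B bf16 model may not fit in 16GB of VRAM, as noted below):

import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM

# Hypothetical unquantized checkpoint of the same model family.
model = AutoModelForCausalLM.from_pretrained(
    "openaccess-ai-collective/minotaur-15b",
    torch_dtype=torch.bfloat16,
)
model.eval()
# dtype is a parameter of ipex.optimize (see the signature in the traceback above).
model = ipex.optimize(model, dtype=torch.bfloat16)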

TheMrCodes commented 1 year ago

Right... yes, it is a 4-bit quantized model. The full bf16 model will not fit in VRAM, so I will have to try something else.

Is there a possibility to detect the needed ops (probably linear_prepack) and file a feature request? It's maybe low priority, but then at least it would be in the backlog.
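
For illustration, the kind of pre-flight check I mean (a hypothetical helper; the names are mine, not an existing IPEX API):

import torch

def find_prepack_risks(model):
    # Collect layers likely to trip the CPU prepack path: quantized
    # linear-like modules (e.g. auto_gptq's QuantLinear) and plain
    # nn.Linear layers with fp16 weights, which is the branch that
    # fails in the traceback above.
    hits = []
    for name, module in model.named_modules():
        cls = type(module).__name__
        if "Linear" in cls and not isinstance(module, torch.nn.Linear):
            hits.append((name, cls, "non-standard linear"))
        elif isinstance(module, torch.nn.Linear) and module.weight.dtype == torch.half:
            hits.append((name, cls, "fp16 weights"))
    return hits

for name, cls, reason in find_prepack_risks(model):
    print(f"{name}: {cls} ({reason})")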