intel / intel-extension-for-pytorch

A Python package for extending the official PyTorch that can easily obtain performance on Intel platform
Apache License 2.0

Inference with inputs of increasing sequence length causes OOM #489

Open Ricky-Ting opened 9 months ago

Ricky-Ting commented 9 months ago

Describe the bug

As seq_len grows across iterations, device memory utilization keeps increasing and eventually hits OOM on the Arc A770. However, if a single inference is run with seq_len=1300 on its own, the device memory occupied should be around 15.2 GB.

Related code
```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM

pretrained = "/mnt/disk1/models/Llama-2-7b-chat-hf/"
device = 'xpu'

model = AutoModelForCausalLM.from_pretrained(pretrained, trust_remote_code=True, use_cache=True)
model = model.half().to(device)

with torch.inference_mode():
    for seq_len in range(1200, 1301, 5):
        input_ids = torch.randint(5, 2000, (1, seq_len)).to(device)
        attention_mask = torch.ones_like(input_ids).to(device)
        print(input_ids.shape)
        generations = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=256,
            do_sample=False,
        )
        torch.xpu.synchronize()
```
Outputs
```
torch.Size([1, 1205])
torch.Size([1, 1210])
torch.Size([1, 1215])
torch.Size([1, 1220])
torch.Size([1, 1225])
torch.Size([1, 1230])
torch.Size([1, 1235])
torch.Size([1, 1240])
torch.Size([1, 1245])
torch.Size([1, 1250])
torch.Size([1, 1255])
Traceback (most recent call last):
  File "/home/arda/baorong/tmp/generate.py", line 21, in <module>
    generations = model.generate(
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/transformers/generation/utils.py", line 1538, in generate
    return self.greedy_search(
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/transformers/generation/utils.py", line 2362, in greedy_search
    outputs = self(
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward
    outputs = self.model(
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 693, in forward
    layer_outputs = decoder_layer(
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 346, in forward
    attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
  File "/opt/anaconda3/envs/baorong-accuracy/lib/python3.9/site-packages/torch/nn/functional.py", line 1845, in softmax
    ret = input.softmax(dim, dtype=dtype)
RuntimeError: Allocation is out of device memory on current platform.
```
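For reference, one way to watch the accumulation is to log the allocator statistics between iterations. This is a minimal sketch assuming `torch.xpu.memory_allocated()` and `torch.xpu.memory_reserved()` are available in this IPEX build (they mirror the `torch.cuda` memory API):

```python
# Sketch: log XPU allocator statistics after each iteration to watch the growth.
# Assumes torch.xpu.memory_allocated()/memory_reserved() exist in this IPEX build,
# mirroring the torch.cuda memory API.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu backend)

def log_xpu_memory(tag: str) -> None:
    allocated_gb = torch.xpu.memory_allocated() / 1024**3
    reserved_gb = torch.xpu.memory_reserved() / 1024**3
    print(f"{tag}: allocated={allocated_gb:.2f} GB, reserved={reserved_gb:.2f} GB")

# e.g. inside the loop above, right after torch.xpu.synchronize():
#     log_xpu_memory(f"seq_len={seq_len}")
```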

Versions

Collecting environment information...
PyTorch version: 2.0.1a0+cxx11.abi
PyTorch CXX11 ABI: Yes
IPEX version: 2.0.110+xpu
IPEX commit: ba7f6c127
Build type: Release

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: N/A
IGC version: 2023.2.0 (2023.2.0.20230622)
CMake version: N/A
Libc version: glibc-2.35

Python version: 3.9.18 (main, Sep 11 2023, 13:41:44)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
Is XPU available: True
DPCPP runtime version: 2023.2.0
MKL version: 2023.2.0
GPU models and configuration: 
[0] _DeviceProperties(name='Intel(R) Arc(TM) A770 Graphics', platform_name='Intel(R) Level-Zero', dev_type='gpu, support_fp64=0, total_memory=15473MB, max_compute_units=512, gpu_eu_count=512)
[1] _DeviceProperties(name='Intel(R) Arc(TM) A770 Graphics', platform_name='Intel(R) Level-Zero', dev_type='gpu, support_fp64=0, total_memory=15473MB, max_compute_units=512, gpu_eu_count=512)
[2] _DeviceProperties(name='Intel(R) UHD Graphics 770', platform_name='Intel(R) Level-Zero', dev_type='gpu, support_fp64=0, total_memory=51261MB, max_compute_units=32, gpu_eu_count=32)
Intel OpenCL ICD version: 23.17.26241.33-647~22.04
Level Zero version: 1.3.26241.33-647~22.04

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Vendor ID:                       GenuineIntel
Model name:                      Intel(R) Core(TM) i9-14900K
CPU family:                      6
Model:                           183
Thread(s) per core:              2
Core(s) per socket:              24
Socket(s):                       1
Stepping:                        1
CPU max MHz:                     6000.0000
CPU min MHz:                     800.0000
BogoMIPS:                        6374.40
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       896 KiB (24 instances)
L1i cache:                       1.3 MiB (24 instances)
L2 cache:                        32 MiB (12 instances)
L3 cache:                        36 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==2.0.110+xpu
[pip3] numpy==1.26.2
[pip3] torch==2.0.1a0+cxx11.abi
[pip3] torchvision==0.15.2a0+cxx11.abi
[conda] intel-extension-for-pytorch 2.0.110+xpu              pypi_0    pypi
[conda] numpy                     1.26.2                   pypi_0    pypi
[conda] torch                     2.0.1a0+cxx11.abi          pypi_0    pypi
[conda] torchvision               0.15.2a0+cxx11.abi          pypi_0    pypi
kta-intel commented 7 months ago

Thanks, let me reproduce and get back to you

kta-intel commented 5 months ago

Really sorry for the delay. I was able to reproduce your issue. I'm not sure what is causing the accumulation, but you can empty the XPU cache between iterations with `torch.xpu.empty_cache()` to avoid the OOM.
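
A minimal sketch of that workaround applied to the loop from the report above, assuming `torch.xpu.empty_cache()` releases cached allocator blocks the same way `torch.cuda.empty_cache()` does:

```python
# Sketch of the suggested workaround: same loop as in the report, with the
# XPU allocator cache emptied between iterations (assumes torch.xpu.empty_cache()
# behaves like torch.cuda.empty_cache()).
with torch.inference_mode():
    for seq_len in range(1200, 1301, 5):
        input_ids = torch.randint(5, 2000, (1, seq_len)).to(device)
        attention_mask = torch.ones_like(input_ids).to(device)
        generations = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=256,
            do_sample=False,
        )
        torch.xpu.synchronize()
        # Drop references from this iteration so their blocks become free,
        # then return the cached blocks to the device.
        del input_ids, attention_mask, generations
        torch.xpu.empty_cache()
```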