pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

PyTorch crashes when moving tensor to GPU on Python 3.10.13 pytorch 2.1.2 + rocm5.6 (Radeon RX 6650 XT) #116531

Closed seadesert closed 9 months ago

seadesert commented 9 months ago

🐛 Describe the bug

I am trying to move a tensor to the GPU with PyTorch 2.1.2 + rocm5.6 on Python 3.10.13, using a Radeon RX 6650 XT.

I have a similar issue to https://github.com/pytorch/pytorch/issues/111355, but the solution/workaround suggested there does not work.

Repro steps, on Arch Linux (kernel 6.6.8-arch1-1):

1. Install the ROCm OpenCL runtime: pacman -S rocm-opencl-runtime
2. Install python310 from the AUR: paru -S python310
3. Create a Python 3.10.13 virtual env: python3.10 -m venv rocm-env
4. Activate the virtual env: source ~/rocm-env/bin/activate
5. Install PyTorch 2.1.2 in the virtual env: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6
6. Apply the workaround from pytorch/issues/111355: export HSA_OVERRIDE_GFX_VERSION=10.3.0

Run the code in gpu_test.py:

import torch
dev = torch.device("cuda")  # ROCm GPUs are exposed through the CUDA device API
t1 = torch.randn(1, 2).to(dev)
print(t1)
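
As a sanity check, something along these lines can confirm whether the ROCm runtime sees the card at all before the failing .to() call; these are standard torch.cuda calls, which are backed by HIP on ROCm builds:

import torch

# On a ROCm build, torch.version.hip is set and the CUDA-style
# device API is implemented on top of HIP.
print("torch:", torch.__version__)
print("HIP runtime:", torch.version.hip)
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:", torch.cuda.device_count())
    print("device 0:", torch.cuda.get_device_name(0))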

Running python3.10 gpu_test.py produces the following error:

Traceback (most recent call last):
  File "/home/usert/gpu_test.py", line 7, in <module>
    print(t2)  # tensor([[ 0.5117, -3.6247]], device='cuda:0')
  File "/home/usert/rocm-env/lib/python3.10/site-packages/torch/_tensor.py", line 431, in __repr__
    return torch._tensor_str._str(self, tensor_contents=tensor_contents)
  File "/home/usert/rocm-env/lib/python3.10/site-packages/torch/_tensor_str.py", line 664, in _str
    return _str_intern(self, tensor_contents=tensor_contents)
  File "/home/usert/rocm-env/lib/python3.10/site-packages/torch/_tensor_str.py", line 595, in _str_intern
    tensor_str = _tensor_str(self, indent)
  File "/home/usert/rocm-env/lib/python3.10/site-packages/torch/_tensor_str.py", line 347, in _tensor_str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
  File "/home/usert/rocm-env/lib/python3.10/site-packages/torch/_tensor_str.py", line 137, in __init__
    nonzero_finite_vals = torch.masked_select(
RuntimeError: HIP error: the operation cannot be performed in the present state
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing HIP_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.
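
As the error text itself suggests, rerunning with HIP_LAUNCH_BLOCKING=1 makes kernel launches synchronous so the reported stack trace points at the actual failing call. A minimal sketch: the variable has to be set before torch initializes the HIP runtime, so it is set before the import here, though exporting it in the shell works just as well.

import os

# Must be set before torch first touches the HIP/ROCm runtime.
os.environ["HIP_LAUNCH_BLOCKING"] = "1"

import torch

dev = torch.device("cuda")
t1 = torch.randn(1, 2).to(dev)  # with blocking launches the failure surfaces here
print(t1)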

Versions

(rocm-env) HSA_OVERRIDE_GFX_VERSION=10.3.0 python3.10 collect_env.py                                           
Collecting environment information...
Traceback (most recent call last):
  File "/home/usert/collect_env.py", line 616, in <module>
    main()
  File "/home/usert/collect_env.py", line 599, in main
    output = get_pretty_env_info()
  File "/home/usert/collect_env.py", line 594, in get_pretty_env_info
    return pretty_str(get_env_info())
  File "/home/usert/collect_env.py", line 466, in get_env_info
    nvidia_gpu_models=get_gpu_info(run_lambda),
  File "/home/usert/collect_env.py", line 151, in get_gpu_info
    (" ({})".format(torch.cuda.get_device_properties(0).gcnArchName) if torch.version.hip is not None else "")
AttributeError: 'torch._C._CudaDeviceProperties' object has no attribute 'gcnArchName'
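
This secondary failure only means that the collect_env.py script in use expects a gcnArchName property that the installed torch 2.1.2+rocm5.6 build does not expose on its device properties object. For illustration, a guarded lookup sidesteps it:

import torch

props = torch.cuda.get_device_properties(0)
# Older ROCm builds of PyTorch do not provide gcnArchName on the
# device properties object, so fall back gracefully instead of raising.
print(getattr(props, "gcnArchName", "gcnArchName not available in this build"))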

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang

seadesert commented 9 months ago

Versions

Collecting environment information...
PyTorch version: 2.1.2+rocm5.6
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.6.31061-8c743ae5d

OS: ArcoLinux (x86_64)
GCC version: (GCC) 13.2.1 20230801
Clang version: 16.0.6
CMake version: Could not collect
Libc version: glibc-2.38

Python version: 3.10.13 (main, Dec 28 2023, 17:41:20) [GCC 13.2.1 20230801] (64-bit runtime)
Python platform: Linux-6.6.8-arch1-1-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon RX 6650 XTNoGCNArchNameOnOldPyTorch
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.6.31061
MIOpen runtime version: 2.20.0
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      39 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             12
On-line CPU(s) list:                0-11
Vendor ID:                          GenuineIntel
Model name:                         11th Gen Intel(R) Core(TM) i5-11400F @ 2.60GHz
CPU family:                         6
Model:                              167
Thread(s) per core:                 2
Core(s) per socket:                 6
Socket(s):                          1
Stepping:                           1
CPU(s) scaling MHz:                 83%
CPU max MHz:                        4400.0000
CPU min MHz:                        800.0000
BogoMIPS:                           5186.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          288 KiB (6 instances)
L1i cache:                          192 KiB (6 instances)
L2 cache:                           3 MiB (6 instances)
L3 cache:                           12 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-11
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed:             Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.24.1
[pip3] pytorch-triton-rocm==2.1.0
[pip3] torch==2.1.2+rocm5.6
[pip3] torchaudio==2.1.2+rocm5.6
[pip3] torchvision==0.16.2+rocm5.6
[conda] Could not collect

hongxiayang commented 9 months ago

Can you try installing PyTorch using

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7

and then use the environment variable below when running your scripts:

export HSA_OVERRIDE_GFX_VERSION=10.3.0
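
For reference, a minimal script to verify the nightly wheel (assuming the install command and the HSA_OVERRIDE_GFX_VERSION export above) could look like this:

import torch

# Confirm the nightly ROCm build is actually the one in use.
print(torch.__version__)    # e.g. 2.3.0.dev...+rocm5.7
print(torch.version.hip)

dev = torch.device("cuda")  # HIP devices go through the CUDA device API
t1 = torch.randn(1, 2).to(dev)
print(t1)                   # should print a tensor with device='cuda:0'
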
seadesert commented 9 months ago

I have tried the suggested steps and did not face any issues on torch==2.3.0.dev20240102+rocm5.7.

hongxiayang commented 9 months ago

I have tried the suggested steps and did not face any issues on torch==2.3.0.dev20240102+rocm5.7.

@seadesert Thanks for verifying and confirming. Feel free to close this issue. The latest wheel works for you because it includes a fix for the problem you faced, which is related to PCIe atomics.

seadesert commented 9 months ago

Thanks a lot!