ROCm / xformers

Hackable and optimized Transformers building blocks, supporting a composable construction.
https://facebookresearch.github.io/xformers/

Crashes GPU (gfx900) during model loading when testing with ComfyUI #14

thenightterorx opened this issue 4 months ago

thenightterorx commented 4 months ago

🐛 Bug

xformers builds and installs without error for gfx900 (using the Docker image), but it crashes the entire GPU (the screen goes black, etc.) as soon as it is actually used.

ComfyUI itself shows no error.
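For reference, a minimal attention call along these lines (a hypothetical repro sketch, not ComfyUI's actual code path) exercises the same fused attention op that crashes here:

```python
# Hypothetical minimal repro: call xformers' memory-efficient attention directly.
# On ROCm builds of PyTorch, the AMD GPU is exposed through the "cuda" device alias.
import torch
import xformers.ops as xops

B, M, H, K = 1, 1024, 8, 64  # batch, sequence length, heads, head dim (arbitrary)
q = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
k = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
v = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)

# Dispatches to the fused FMHA kernel; on this gfx900 setup the GPU
# hangs and resets instead of returning.
out = xops.memory_efficient_attention(q, k, v)
torch.cuda.synchronize()
print(out.shape)  # expected: torch.Size([1, 1024, 8, 64])
```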

dmesg output:

```
[ 2742.462312] [drm:amdgpu_job_timedout [amdgpu]] ERROR ring page1 timeout, signaled seq=2216, emitted seq=2218
[ 2742.462662] amdgpu 0000:2b:00.0: amdgpu: GPU reset begin!
[ 2742.462693] amdgpu: Failed to suspend process 0x8007
[ 2742.520053] amdgpu 0000:2b:00.0: amdgpu: psp gfx command UNLOAD_TA(0x2) failed and response status is (0x117)
[ 2742.548437] amdgpu 0000:2b:00.0: amdgpu: BACO reset
[ 2744.090326] amdgpu 0000:2b:00.0: amdgpu: GPU reset succeeded, trying to resume
[ 2744.090572] [drm] PCIE GART of 512M enabled.
[ 2744.090574] [drm] PTB located at 0x000000F7FEF00000
[ 2744.090643] [drm] VRAM is lost due to GPU reset!
[ 2744.090644] amdgpu 0000:2b:00.0: amdgpu: PSP is resuming...
[ 2744.278354] amdgpu 0000:2b:00.0: amdgpu: reserve 0x400000 from 0xf7fe400000 for PSP TMR
[ 2744.455003] [drm] kiq ring mec 2 pipe 1 q 0
[ 2744.476864] [drm] UVD and UVD ENC initialized successfully.
[ 2744.577481] [drm] VCE initialized successfully.
[ 2744.577492] amdgpu 0000:2b:00.0: amdgpu: ring gfx uses VM inv eng 0 on hub 0
[ 2744.577494] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0
[ 2744.577496] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0
[ 2744.577498] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0
[ 2744.577500] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0
[ 2744.577501] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0
[ 2744.577503] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0
[ 2744.577505] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0
[ 2744.577506] amdgpu 0000:2b:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0
[ 2744.577508] amdgpu 0000:2b:00.0: amdgpu: ring kiq_0.2.1.0 uses VM inv eng 11 on hub 0
[ 2744.577510] amdgpu 0000:2b:00.0: amdgpu: ring sdma0 uses VM inv eng 0 on hub 8
[ 2744.577512] amdgpu 0000:2b:00.0: amdgpu: ring page0 uses VM inv eng 1 on hub 8
[ 2744.577513] amdgpu 0000:2b:00.0: amdgpu: ring sdma1 uses VM inv eng 4 on hub 8
[ 2744.577515] amdgpu 0000:2b:00.0: amdgpu: ring page1 uses VM inv eng 5 on hub 8
[ 2744.577517] amdgpu 0000:2b:00.0: amdgpu: ring uvd_0 uses VM inv eng 6 on hub 8
[ 2744.577518] amdgpu 0000:2b:00.0: amdgpu: ring uvd_enc_0.0 uses VM inv eng 7 on hub 8
[ 2744.577520] amdgpu 0000:2b:00.0: amdgpu: ring uvd_enc_0.1 uses VM inv eng 8 on hub 8
[ 2744.577522] amdgpu 0000:2b:00.0: amdgpu: ring vce0 uses VM inv eng 9 on hub 8
[ 2744.577523] amdgpu 0000:2b:00.0: amdgpu: ring vce1 uses VM inv eng 10 on hub 8
[ 2744.577525] amdgpu 0000:2b:00.0: amdgpu: ring vce2 uses VM inv eng 11 on hub 8
[ 2744.579367] amdgpu 0000:2b:00.0: amdgpu: recover vram bo from shadow start
[ 2744.580353] amdgpu 0000:2b:00.0: amdgpu: recover vram bo from shadow done
[ 2744.580371] amdgpu 0000:2b:00.0: amdgpu: GPU reset(3) succeeded!
```

Expected behavior

I expected it to work.

Environment

Please copy and paste the output of the environment collection script from PyTorch (or fill out the checklist below manually).

You can run the script with python -m torch.utils.collect_env:
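For instance, assuming a working PyTorch install, the same report can also be produced from inside Python:

```python
# Print the PyTorch environment report pasted below.
from torch.utils.collect_env import main

main()
```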

```
PyTorch version: 2.3.1+rocm6.0
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.0.32830-d62f6a171

OS: Arch Linux (x86_64)
GCC version: (GCC) 14.1.1 20240522
Clang version: 17.0.6
CMake version: version 3.29.3
Libc version: glibc-2.39

Python version: 3.10.13 (main, Oct 17 2023, 22:22:30) [GCC 13.2.1 20230801] (64-bit runtime)
Python platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon Pro WX 9100 (gfx900:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.1.1
/usr/lib/libcudnn_adv.so.9.1.1
/usr/lib/libcudnn_cnn.so.9.1.1
/usr/lib/libcudnn_engines_precompiled.so.9.1.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.1.1
/usr/lib/libcudnn_graph.so.9.1.1
/usr/lib/libcudnn_heuristic.so.9.1.1
/usr/lib/libcudnn_ops.so.9.1.1
HIP runtime version: 6.0.32830
MIOpen runtime version: 3.0.0
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 3900X 12-Core Processor
CPU family: 23
Model: 113
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 57%
CPU max MHz: 4672.0698
CPU min MHz: 2200.0000
BogoMIPS: 7588.69
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.0
[pip3] onnxruntime-gpu==1.18.0
[pip3] open-clip-torch==2.24.0
[pip3] pytorch-lightning==2.2.5
[pip3] pytorch-triton-rocm==2.3.1
[pip3] torch==2.3.1+rocm6.0
[pip3] torchaudio==2.3.1+rocm6.0
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.18.1+rocm6.0
[pip3] triton==2.3.1
[conda] Could not collect
```

Additional context

If it means anything, my locally built xformers wheel is only 635 KB, which is smaller than the prebuilt wheels. I don't know whether that means something in it is broken.
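A quick way to check what actually got packaged is to list the compiled extensions inside the wheel; a sketch, where the wheel filename and the expected .so name are assumptions:

```python
# List compiled extensions in a built wheel (a wheel is a plain zip archive).
# A build whose GPU kernels compiled should contain a multi-megabyte
# xformers/_C*.so; a ~635 KB wheel suggests the kernels were skipped.
import zipfile

wheel_path = "xformers-0.0.27-cp310-cp310-linux_x86_64.whl"  # assumed filename
with zipfile.ZipFile(wheel_path) as whl:
    for info in whl.infolist():
        if info.filename.endswith(".so"):
            print(f"{info.filename}: {info.file_size / 1e6:.1f} MB")
```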

thenightterorx commented 4 months ago

Never mind: when I build natively with HIP installed (I had to remove CUDA, as it kept breaking things), I get the same errors as everyone else.
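One way to confirm what a build actually enabled is xformers' own diagnostic dump, for example:

```python
# Run xformers' built-in diagnostic, which lists each op/kernel and whether
# it is available on the current device (equivalent to `python -m xformers.info`).
import subprocess
import sys

subprocess.run([sys.executable, "-m", "xformers.info"], check=True)
```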

thenightterorx commented 4 months ago

The error is "needs target feature mai-insts".

qianfengz commented 3 months ago

The GEMM MFMA instructions used to implement FMHA (attention) are only available on MI200 (gfx90a) and MI300 (gfx940 series). gfx900 lacks the mai-insts target feature, which is exactly what the compile error above reports.
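Given that constraint, a caller could guard the xformers attention path at startup. A sketch, assuming PyTorch's ROCm build exposes gcnArchName on the device properties and that the supported-architecture list above is complete:

```python
# Fall back to plain PyTorch attention on GPUs without MFMA (e.g. gfx900).
import torch

MFMA_ARCHS = ("gfx90a", "gfx940", "gfx941", "gfx942")  # per the comment above

def xformers_fmha_supported(device_index: int = 0) -> bool:
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(device_index)
    # On ROCm builds, gcnArchName looks like "gfx900:xnack-"; CUDA builds lack it.
    arch = getattr(props, "gcnArchName", "").split(":")[0]
    return arch in MFMA_ARCHS

print(xformers_fmha_supported())  # False on the WX 9100 (gfx900) in this report
```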