abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

2.27 and below Cannot Build or Install llama_cpp_python on AMD ROCm HIP POP OS 22.04 CMake Build Failed #1066

Open ganakee opened 9 months ago

ganakee commented 9 months ago

Expected Behavior

llama-cpp-python should update and build successfully.

Current Behavior

I cannot build any version except 0.1.59 (which I only tried because of suggestions on a similar apparent bug against 0.1.60). I tried 0.2.26 down to 0.2.10 manually, one at a time, and none of them build. All fail at step 9 of 23.

For example, I install/update llama-cpp-python with:

CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python==0.2.17 --upgrade --force-reinstall --no-cache-dir

I have tried several versions, including the current one. All fail on task 9 of 23:

[9/23] /usr/bin/c++ -DGGML_USE_CUBLAS -DGGML_USE_HIPBLAS -DLLAMA_BUILD -DLLAMA_SHARED -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -D__HIP_PLATFORM_AMD__=1 -D__HIP_PLATFORM_HCC__=1 -Dllama_EXPORTS -I/tmp/pip-install-sbxror78/llama-cpp-python_fd8adbe6d6ce4076a81f5669a940df3d/vendor/llama.cpp/. -isystem /opt/rocm/include -isystem /opt/rocm-5.7.0/include -O3 -DNDEBUG -std=gnu++11 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-array-bounds -Wno-format-truncation -Wextra-semi -march=native -MD -MT vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -MF vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o.d -o vendor/llama.cpp/CMakeFiles/llama.dir/llama.cpp.o -c /tmp/pip-install-sbxror78/llama-cpp-python_fd8adbe6d6ce4076a81f5669a940df3d/vendor/llama.cpp/llama.cpp
      ninja: build stopped: subcommand failed.

      *** CMake build failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python

Environment and Context

I have an AMD Radeon RX 6650M GPU with the freshly installed AMD 5.7.00.48.50700 drivers (amdgpu-install_5.7.00.48.50700-1_all.deb) with AMD usecases: graphics,rocm. The system is Pop!_OS 22.04 with all patches and updates as of 2024-01-05.

lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  Model name:            AMD Ryzen 7 6800H with Radeon Graphics
    CPU family:          25
    Model:               68
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            1
    CPU max MHz:         4785.0000
    CPU min MHz:         400.0000
    BogoMIPS:            6388.01
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization features:
  Virtualization:        AMD-V
Caches (sum of all):
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    4 MiB (8 instances)
  L3:                    16 MiB (1 instance)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Vulnerable: Safe RET, no microcode
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected

$ uname -a
Linux pop-os 6.6.6-76060606-generic #202312111032~1702306143~22.04~d28ffec SMP PREEMPT_DYNAMIC Mon D x86_64 x86_64 x86_64 GNU/Linux

$ python3 --version
Python 3.10.12

$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu

$ g++ --version
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

Failure Information (for bugs)

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir This fails with the error above at step 9.
  2. CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python==0.1.59 --upgrade --force-reinstall --no-cache-dir This builds, BUT it cannot use any current models in GGUF format due to incompatibility. I only tried this because of another report of a similar issue with 0.1.60.
  3. Backed off versions from 0.2.26 to 0.2.10 using variations of CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python==0.2.xx --upgrade --force-reinstall --no-cache-dir These all fail with the error above at step 9.

Trying the following also FAILS:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python -m pip install . generates this error:

python -m pip install .
Defaulting to user installation because normal site-packages is not writeable
Processing /media/wind/llama-cpp-python
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in /home/shannon/.local/lib/python3.10/site-packages (from llama_cpp_python==0.2.27) (4.9.0)
Requirement already satisfied: diskcache>=5.6.1 in /home/shannon/.local/lib/python3.10/site-packages (from llama_cpp_python==0.2.27) (5.6.3)
Requirement already satisfied: numpy>=1.20.0 in /home/shannon/.local/lib/python3.10/site-packages (from llama_cpp_python==0.2.27) (1.26.3)
Building wheels for collected packages: llama_cpp_python
  Building wheel for llama_cpp_python (pyproject.toml) ... error
  error: subprocess-exited-with-error

    × Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
    │ exit code: 1
    ╰─> [52 lines of output]
        scikit-build-core 0.7.1 using CMake 3.28.1 (wheel)
        Configuring CMake...
        loading initial cache file /tmp/tmp4k5uypsv/build/CMakeInit.txt
        -- The C compiler identification is GNU 11.4.0
        -- The CXX compiler identification is GNU 11.4.0
        -- Detecting C compiler ABI info
        -- Detecting C compiler ABI info - done
        -- Check for working C compiler: /usr/bin/cc - skipped
        -- Detecting C compile features
        -- Detecting C compile features - done
        -- Detecting CXX compiler ABI info
        -- Detecting CXX compiler ABI info - done
        -- Check for working CXX compiler: /usr/bin/c++ - skipped
        -- Detecting CXX compile features
        -- Detecting CXX compile features - done
        CMake Error at CMakeLists.txt:20 (add_subdirectory): The source directory

      /media/wind/llama-cpp-python/vendor/llama.cpp
    
    does not contain a CMakeLists.txt file.

    CMake Error at CMakeLists.txt:21 (install): install TARGETS given target "llama" which does not exist.

    CMake Error at CMakeLists.txt:30 (install): install TARGETS given target "llama" which does not exist.

    CMake Error at CMakeLists.txt:50 (add_subdirectory): add_subdirectory given source "vendor/llama.cpp/examples/llava" which is not an existing directory.

    CMake Error at CMakeLists.txt:51 (set_target_properties): set_target_properties Can not find target to add properties to: llava_shared

    CMake Error at CMakeLists.txt:56 (install): install TARGETS given target "llava_shared" which does not exist.

    CMake Error at CMakeLists.txt:65 (install): install TARGETS given target "llava_shared" which does not exist.

    -- Configuring incomplete, errors occurred!

    *** CMake configuration failed
    [end of output]

    note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed building wheel for llama_cpp_python
    Failed to build llama_cpp_python
    ERROR: Could not build wheels for llama_cpp_python, which is required to install pyproject.toml-based projects


  1. cd ./vendor/llama.cpp

  2. Follow llama.cpp's instructions to build llama.cpp with CMake. I did this and compiled using make LLAMA_HIPBLAS=1. The first run produced a missing math include error in the C++ code. I installed libstdc++-12-dev and the error went away.

  3. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp

I am not sure how to do this. If I run ./main with various arguments, I get help output indicating some syntax error.
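For anyone else stuck on this step, a minimal ./main invocation looks something like the sketch below (the model path is a placeholder; -ngl sets how many layers to offload to the GPU, analogous to n_gpu_layers in the Python binding):

./main -m /path/to/model.gguf -p "Hello, my name is" -n 64 -ngl 5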

ganakee commented 9 months ago

I did more research on this issue. I also updated to ROCm 6.0 (amdgpu-install_6.0.00.48.50700-1_all.deb). The problem still occurs. I can compile with the OpenCL flags but not with the HIP flags.

d1ggs commented 9 months ago

Same issue here: I can compile llama.cpp in the submodule correctly, but I can't get the package compilation to work.

kkkkkkjd commented 9 months ago

I have the same problem

NonaSuomy commented 9 months ago

This worked for me:

CMAKE_ARGS="-DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -DCMAKE_PREFIX_PATH=/opt/rocm -DAMDGPU_TARGETS=gfx900" FORCE_CMAKE=1 pip install llama-cpp-python==0.2.29
ganakee commented 9 months ago

Thank you @NonaSuomy !!!!!!

The technique (with some modifications) noted by @NonaSuomy worked on my system (AMD RX 6650M).

Long Discussion

I used CMAKE_ARGS="-DLLAMA_HIPBLAS=ON -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ -DCMAKE_PREFIX_PATH=/opt/rocm -DAMDGPU_TARGETS=gfx1030" FORCE_CMAKE=1 pip install llama-cpp-python==0.2.29 --upgrade --force-reinstall --no-cache-dir

This targets the gfx1030 instruction set (rather than gfx900). My RX 6650M is technically gfx1032. The digits encode the GFX version: gfx1030 means 10.3.0.

Careful reading of the TechPowerup specs for the RX 6650M helped me get to know the card better.

I ran the above (as modified) as suggested by @NonaSuomy (again, thank you!). llama-cpp-python compiled successfully.

However, I still could not get a simple test script to run. I received the following error:

/bin/python /media/wind/Temp/testmodelswin.py

rocBLAS error: Cannot read /opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary.dat: No such file or directory for GPU arch : gfx1032
 List of available TensileLibrary Files : 
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx1030.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx90a.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx1100.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx1101.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx940.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx900.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx942.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx908.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx941.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx1102.dat"
"/opt/rocm-6.0.0/lib/llvm/bin/../../../lib/rocblas/library/TensileLibrary_lazy_gfx906.dat"
Aborted (core dumped)

Override HSA_OVERRIDE_GFX_VERSION

OK, so llama-cpp-python was compiling but could not be used, because the AMD RX 6650M is technically unsupported. The card reports the gfx1032 (10.3.2) instruction set, and ROCm 6.0 currently ships rocBLAS Tensile libraries only for gfx1030 (10.3.0). AMD seems to be extremely persnickety about versions.
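One way to check which architectures your installed rocBLAS actually ships Tensile kernels for is to list its library directory (the path below assumes the default ROCm 6.0 install location from the error above):

ls /opt/rocm-6.0.0/lib/rocblas/library/ | grep -o 'gfx[0-9a-f]*' | sort -u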

After more research: the fix is to override the reported version with something like: HSA_OVERRIDE_GFX_VERSION=10.3.0 /bin/python /media/wind/Temp/testmodelswin.py

Bingo. The sample script ran (result edited below), and note that the model report shows layers offloaded to the GPU!

HSA_OVERRIDE_GFX_VERSION=10.3.0 /bin/python /media/wind/Temp/testmodelswin.py
ggml_init_cublas: GGML_CUDA_FORCE_MMQ:   no
ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 2 ROCm devices:
  Device 0: AMD Radeon RX 6650M, compute capability 10.3, VMM: no
  Device 1: AMD Radeon Graphics, compute capability 10.3, VMM: no
llama_model_loader: loaded meta data with 20 key-value pairs and 325 tensors from /media/wind//Downloads/Models-Hugging-Face-LLAMACCP-GGUF/phi-2.Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
{removed}
llama_model_loader: - type  f32:  195 tensors
llama_model_loader: - type q4_K:   81 tensors
llama_model_loader: - type q5_K:   32 tensors
llama_model_loader: - type q6_K:   17 tensors
{removed}
llm_load_tensors: offloading 5 repeating layers to GPU
llm_load_tensors: offloaded 5/33 layers to GPU
llm_load_tensors:      ROCm0 buffer size =   242.53 MiB
llm_load_tensors:        CPU buffer size =  1602.48 MiB
...........................................................................................
{removed}
llama_new_context_with_model: graph splits (measure): 13
llama_new_context_with_model:      ROCm0 compute buffer size =    69.50 MiB
llama_new_context_with_model:      ROCm1 compute buffer size =     0.00 MiB
llama_new_context_with_model:  ROCm_Host compute buffer size =   105.00 MiB
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 

Permanently Override HSA_OVERRIDE_GFX_VERSION

To make HSA_OVERRIDE_GFX_VERSION persist, export the variable from ~/.bashrc:

gedit ~/.bashrc

scroll to the bottom and add

# Used for ROCM LLM 
export HSA_OVERRIDE_GFX_VERSION=10.3.0

Save the .bashrc file and then run:

source ~/.bashrc
source ~/.profile

These commands reload the variables from .bashrc. You can also just log out or reboot.

You can test from a terminal by running echo $HSA_OVERRIDE_GFX_VERSION; you should see 10.3.0.

Snippet from Test Script

If this helps others, my test script uses a VERY simple model load. The RX 6650M is a discrete card, but the laptop also has an integrated GPU (see rocminfo below). Thus, the test script MUST pass the main_gpu flag; I use main_gpu=1. Also, I use n_gpu_layers=5 here, which offloads 5 layers. See the llama.cpp documentation.


from llama_cpp import Llama

my_model_path = "/path/to/model.gguf"  # placeholder: path to a local GGUF model

llamacpp_model = Llama(model_path=my_model_path,
                       n_ctx=512,       # context window size
                       n_gpu_layers=5,  # offload 5 layers to the GPU
                       main_gpu=1,      # primary ROCm device (see rocminfo output below)
                       verbose=True)
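For completeness, a minimal sketch of actually generating text with the loaded model; the prompt, max_tokens, and stop values here are arbitrary choices, but the callable interface and the OpenAI-style result dict are the standard llama-cpp-python API:

output = llamacpp_model(
    "Q: Name the planets in the solar system. A:",  # arbitrary test prompt
    max_tokens=64,  # cap the generated length
    stop=["Q:"],    # stop before the model invents a new question
)
print(output["choices"][0]["text"])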

ROCMINFO

rocminfo
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen 7 6800H with Radeon Graphics
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen 7 6800H with Radeon Graphics
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
  Cache Info:              
    L1:                      32768(0x8000) KB                   
  Chip ID:                 0(0x0)                             
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   4785                               
  BDFID:                   0                                  
  Internal Node ID:        0                                  
  Compute Unit:            16                                 
  SIMDs per CU:            0                                  
  Shader Engines:          0                                  
  Shader Arrs. per Eng.:   0                                  
  WatchPts on Addr. Ranges:1                                  
  Features:                None
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: FINE GRAINED        
      Size:                    32044236(0x1e8f4cc) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    32044236(0x1e8f4cc) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
    Pool 3                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    32044236(0x1e8f4cc) KB             
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       TRUE                               
  ISA Info:                
*******                  
Agent 2                  
*******                  
  Name:                    gfx1032                            
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon RX 6650M                
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      16(0x10) KB                        
    L2:                      2048(0x800) KB                     
    L3:                      32768(0x8000) KB                   
  Chip ID:                 29679(0x73ef)                      
  ASIC Revision:           0(0x0)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2625                               
  BDFID:                   768                                
  Internal Node ID:        1                                  
  Compute Unit:            28                                 
  SIMDs per CU:            2                                  
  Shader Engines:          2                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 109                                
  SDMA engine uCode::      76                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    8372224(0x7fc000) KB               
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    8372224(0x7fc000) KB               
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1032         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*******                  
Agent 3                  
*******                  
  Name:                    gfx1035                            
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon Graphics                
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    2                                  
  Device Type:             GPU                                
  Cache Info:              
    L1:                      16(0x10) KB                        
    L2:                      2048(0x800) KB                     
  Chip ID:                 5761(0x1681)                       
  ASIC Revision:           2(0x2)                             
  Cacheline Size:          64(0x40)                           
  Max Clock Freq. (MHz):   2200                               
  BDFID:                   2304                               
  Internal Node ID:        2                                  
  Compute Unit:            12                                 
  SIMDs per CU:            2                                  
  Shader Engines:          1                                  
  Shader Arrs. per Eng.:   2                                  
  WatchPts on Addr. Ranges:4                                  
  Coherent Host Access:    FALSE                              
  Features:                KERNEL_DISPATCH 
  Fast F16 Operation:      TRUE                               
  Wavefront Size:          32(0x20)                           
  Workgroup Max Size:      1024(0x400)                        
  Workgroup Max Size per Dimension:
    x                        1024(0x400)                        
    y                        1024(0x400)                        
    z                        1024(0x400)                        
  Max Waves Per CU:        32(0x20)                           
  Max Work-item Per CU:    1024(0x400)                        
  Grid Max Size:           4294967295(0xffffffff)             
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)             
    y                        4294967295(0xffffffff)             
    z                        4294967295(0xffffffff)             
  Max fbarriers/Workgrp:   32                                 
  Packet Processor uCode:: 113                                
  SDMA engine uCode::      37                                 
  IOMMU Support::          None                               
  Pool Info:               
    Pool 1                   
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED      
      Size:                    524288(0x80000) KB                 
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 2                   
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    524288(0x80000) KB                 
      Allocatable:             TRUE                               
      Alloc Granule:           4KB                                
      Alloc Alignment:         4KB                                
      Accessible by all:       FALSE                              
    Pool 3                   
      Segment:                 GROUP                              
      Size:                    64(0x40) KB                        
      Allocatable:             FALSE                              
      Alloc Granule:           0KB                                
      Alloc Alignment:         0KB                                
      Accessible by all:       FALSE                              
  ISA Info:                
    ISA 1                    
      Name:                    amdgcn-amd-amdhsa--gfx1035         
      Machine Models:          HSA_MACHINE_MODEL_LARGE            
      Profiles:                HSA_PROFILE_BASE                   
      Default Rounding Mode:   NEAR                               
      Default Rounding Mode:   NEAR                               
      Fast f16:                TRUE                               
      Workgroup Max Size:      1024(0x400)                        
      Workgroup Max Size per Dimension:
        x                        1024(0x400)                        
        y                        1024(0x400)                        
        z                        1024(0x400)                        
      Grid Max Size:           4294967295(0xffffffff)             
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)             
        y                        4294967295(0xffffffff)             
        z                        4294967295(0xffffffff)             
      FBarrier Max Size:       32                                 
*** Done ***             
Gaozizhong commented 8 months ago

The gcc version needs to be upgraded to version 11; once that's done, it installs normally.
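For reference, a minimal sketch of installing and selecting GCC 11 on Ubuntu/Pop!_OS 22.04 (package names assume the stock apt repositories):

sudo apt install gcc-11 g++-11
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 110
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 110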

kkkkkkjd commented 8 months ago

The gcc version needs to be upgraded to version 11; once that's done, it installs normally.

Oh? I'll go back and give that a try.