ggerganov / llama.cpp

LLM inference in C/C++
MIT License

[Issue] Prompt Size Issue when using CLBLAST | GGML_ASSERT: ggml.c:11270: ne02 == ne12 Aborted (core dumped) #3263

Closed. akumaburn closed this issue 1 year ago

akumaburn commented 1 year ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Expected Behavior

llama.cpp compiled with: make LLAMA_CLBLAST=ON

It should support prompts longer than 116 characters (114 not counting the quotes).

Works with the following:

/home/user/Desktop/Projects/llama.cpp/main --interactive --mlock --ctx_size 4096 --temp 0.239 --top_k 200 --top_p 0.945 --repeat_last_n 512 --batch_size 4096 --repeat_penalty 1.0 --keep -1 --model /home/user/Desktop/Projects/LLaMA/wizardlm-1.0-uncensored-codellama-34b.Q5_K_M.gguf --threads 16 --n_predict 4096 --reverse-prompt User: --n-gpu-layers 16 --prompt "A transcript of a dialog, where the User interacts with his servant named Mia. Mia is an expert in all subjects.12"

Does not work with the following:

/home/user/Desktop/Projects/llama.cpp/main --interactive --mlock --ctx_size 4096 --temp 0.239 --top_k 200 --top_p 0.945 --repeat_last_n 512 --batch_size 4096 --repeat_penalty 1.0 --keep -1 --model /home/user/Desktop/Projects/LLaMA/wizardlm-1.0-uncensored-codellama-34b.Q5_K_M.gguf --threads 16 --n_predict 4096 --reverse-prompt User: --n-gpu-layers 16 --prompt "A transcript of a dialog, where the User interacts with his servant named Mia. Mia is an expert in all subjects.123"

To be clear, this prompt may work when llama.cpp is compiled without LLAMA_CLBLAST=ON.

Current Behavior

llm_load_tensors: using OpenCL for GPU acceleration
llm_load_tensors: mem required  = 15233.81 MB (+  768.00 MB per state)
llm_load_tensors: offloading 16 repeating layers to GPU
llm_load_tensors: offloaded 16/49 layers to GPU
llm_load_tensors: VRAM used: 7501 MB
....................................................................................................
llama_new_context_with_model: kv self size  =  768.00 MB
llama_new_context_with_model: compute buffer total size = 4481.49 MB

system_info: n_threads = 16 / 24 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
main: interactive mode on.
Reverse prompt: 'User:'
sampling: repeat_last_n = 512, repeat_penalty = 1.000000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 200, tfs_z = 1.000000, top_p = 0.945000, typical_p = 1.000000, temp = 0.239000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000
generate: n_ctx = 4096, n_batch = 4096, n_predict = 4096, n_keep = 32

== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

 A transcript of a dialog, where the User interacts with his servant named Mia. Mia is an expert in all subjects.123GGML_ASSERT: ggml.c:11270: ne02 == ne12
Aborted (core dumped)

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

llama.cpp]$ clinfo
Number of platforms                               1
  Platform Name                                   AMD Accelerated Parallel Processing
  Platform Vendor                                 Advanced Micro Devices, Inc.
  Platform Version                                OpenCL 2.1 AMD-APP (3581.0)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_amd_event_callback 
  Platform Extensions function suffix             AMD
  Platform Host timer resolution                  1ns

  Platform Name                                   AMD Accelerated Parallel Processing
Number of devices                                 1
  Device Name                                     gfx1030
  Device Vendor                                   Advanced Micro Devices, Inc.
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 2.0 
  Driver Version                                  3581.0 (HSA1.1,LC)
  Device OpenCL C Version                         OpenCL C 2.0 
  Device Type                                     GPU
  Device Board Name (AMD)                         AMD Radeon RX 6900 XT
  Device PCI-e ID (AMD)                           0x73af
  Device Topology (AMD)                           PCI-E, 0000:2f:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               40
  SIMD per compute unit (AMD)                     4
  SIMD width (AMD)                                32
  SIMD instruction width (AMD)                    1
  Max clock frequency                             2720MHz
  Graphics IP (AMD)                               10.3
  Device Partition                                (core)
    Max number of sub-devices                     40
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             256
  Preferred work group size (AMD)                 256
  Max work group size (AMD)                       1024
  Preferred work group size multiple (kernel)     32
  Wavefront width (AMD)                           32
  Preferred / native vector sizes                 
    char                                                 4 / 4       
    short                                                2 / 2       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 1 / 1        (cl_khr_fp16)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             No
    Round to nearest                              No
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              17163091968 (15.98GiB)
  Global free memory (AMD)                        16556032 (15.79GiB) 16556032 (15.79GiB)
  Global memory channels (AMD)                    8
  Global memory banks per channel (AMD)           4
  Global memory bank width (AMD)                  256 bytes
  Error Correction support                        No
  Max memory allocation                           14588628168 (13.59GiB)
  Unified memory for Host and Device              No
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   Yes
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Preferred alignment for atomics                 
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    14588628168 (13.59GiB)
  Preferred total size of global vars             17163091968 (15.98GiB)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        16384 (16KiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             29615
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 8192 images
    Base address alignment for 2D image buffers   256 bytes
    Pitch alignment for 2D image buffers          256 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             16384x16384x8192 pixels
    Max number of read image args                 128
    Max number of write image args                8
    Max number of read/write image args           64
  Max number of pipe args                         16
  Max active pipe reservations                    16
  Max pipe packet size                            1703726280 (1.587GiB)
  Local memory type                               Local
  Local memory size                               65536 (64KiB)
  Local memory size per CU (AMD)                  65536 (64KiB)
  Local memory banks (AMD)                        32
  Max number of constant args                     8
  Max constant buffer size                        14588628168 (13.59GiB)
  Preferred constant buffer size (AMD)            16384 (16KiB)
  Max size of kernel argument                     1024
  Queue properties (on host)                      
    Out-of-order execution                        No
    Profiling                                     Yes
  Queue properties (on device)                    
    Out-of-order execution                        Yes
    Profiling                                     Yes
    Preferred size                                262144 (256KiB)
    Max size                                      8388608 (8MiB)
  Max queues on device                            1
  Max events on device                            1024
  Prefer user sync for interop                    Yes
  Number of P2P devices (AMD)                     0
  Profiling timer resolution                      1ns
  Profiling timer offset since Epoch (AMD)        0ns (Wed Dec 31 19:00:00 1969)
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Thread trace supported (AMD)                  No
    Number of async queues (AMD)                  8
    Max real-time compute queues (AMD)            8
    Max real-time compute units (AMD)             40
  printf() buffer size                            4194304 (4MiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [AMD]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx1030
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx1030
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx1030

Linux phoenix-pc 6.4.12-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Thu, 24 Aug 2023 00:37:46 +0000 x86_64 GNU/Linux

(Running Arch Linux with Zen Kernel)

Python 3.11.5
GNU Make 4.4.1
Built for x86_64-pc-linux-gnu
g++ (GCC) 13.2.1 20230801

Failure Information (for bugs)

When llama.cpp is compiled with the LLAMA_CLBLAST=ON option, it doesn't handle long prompts (longer than roughly 114-116 characters).

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. Compile using make LLAMA_CLBLAST=ON
  2. Run with any Llama GGUF model, or use the same one as above ( https://huggingface.co/TheBloke/WizardLM-1.0-Uncensored-CodeLlama-34B-GGUF )
  3. Use the above options
  4. Use a 120 character prompt

Failure Logs


GGML_ASSERT: ggml.c:11270: ne02 == ne12
Aborted (core dumped)

Environment info:


llama.cpp$ git log | head -1
commit 8781013ef654270cbead3e0011e33a6d690fb168
shibe2 commented 1 year ago

Larger Llama 2 models use GQA, which requires an operation that is currently not supported by the CLBlast back-end. I'm working on implementing it. #3002
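
For context, a rough sketch of why the assert fires under GQA (the head counts are illustrative values for a 34B CodeLlama-style model, and the snippet is a simplification, not the exact llama.cpp code):

    /* With grouped-query attention, K and V have fewer heads than Q.
       In ggml's 4-D tensors the head count sits in dimension 2, so for
       the K*Q matmul the two sources disagree in that dimension:
         src0 (K): ne02 = n_head_kv   // e.g. 8
         src1 (Q): ne12 = n_head      // e.g. 64
       The regular CPU path broadcasts K across the query heads, but the
       accelerated path used for large batches did not, hence: */
    GGML_ASSERT(ne02 == ne12);   // 8 != 64 -> Aborted (core dumped)

This would also explain the ~114-116 character threshold the reporter saw: the accelerated matrix multiply presumably only kicks in once the prompt batch is large enough (on the order of 32 tokens), so shorter prompts never reach the unsupported case.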

MaggotHATE commented 1 year ago

I've copied the changes into my project, and they seem to fix the issue. Tested on phind-codellama-34b-v1.Q4_K_S.gguf, both with the latest main commits and with the custom attention mask.

shibe2 commented 1 year ago

Great news!

shibe2 commented 1 year ago

My current work is here. I recommend it over my previous attempts, if you want to use it. Although the results should be equivalent, I've done much more testing on the latest code. Detailed discussion in #3002.

MaggotHATE commented 1 year ago

Seems like the new version works too. No problems with Llama models, although I've only tested smaller ones (7B and 13B). The previously tested phind-codellama-34b-v1.Q4_K_S.gguf still works.

On a side note, it also works with stable-diffusion.cpp (which doesn't have layer offloading at the moment, but there is still a small speedup in generation with CLBlast). So I assume it doesn't break anything in general?

paralin commented 1 year ago

@shibe2 Thanks! Tested your branch: https://github.com/shibe2/llama.cpp/commit/f5ed18bfa71878fffa4733d75095952e407a62f7 with an AMD GPU and LLAMA_CLBLAST with mistral-7b-v0.1.Q4_0.gguf - works great!

MaggotHATE commented 1 year ago

Yep, just encountered this exact ne02 == ne12 problem on mistral-7b-instruct-v0.1.Q4_K_S.gguf while testing the server from the latest commit. Seems like this problem is not exclusive to 34B models now. The fix still works. @shibe2 Thank you for this patch!

paschembri commented 1 year ago

Same here. Thanks @shibe2 !

shibe2 commented 1 year ago

Yeah, Mistral 7B uses GQA too.
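
A quick way to check whether a model uses GQA is the model load log, which prints the head counts; when n_head_kv is smaller than n_head, the K/V heads are shared across query heads. For Mistral 7B that is 32 query heads and 8 KV heads; the lines below are approximate, the exact log formatting may differ:

llm_load_print_meta: n_head     = 32
llm_load_print_meta: n_head_kv  = 8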

paralin commented 1 year ago

Thanks @shibe2 !

mzeq1717 commented 1 year ago

Hey, I'm having the same error when using the llama-cpp-python library (within LangChain). I saw that there is a solution for llama.cpp, but I don't understand how to apply it when installing the llama-cpp-python package, or whether that is even possible. I'm not an expert at setting up a working environment, so any help would be much appreciated.

shibe2 commented 1 year ago

@mzeq1717 The Python package must be up to date with the latest changes in llama.cpp. If you build it from the main branch now, it should include support for GQA and other fixes in the OpenCL back-end.
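
For example, forcing a source rebuild with the CLBlast flag enabled should pick up the fix (this follows the llama-cpp-python build instructions of that time; treat it as a starting point rather than the exact required command):

CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python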