artyom-beilis / pytorch_dlprim

DLPrimitives/OpenCL out of tree backend for pytorch
http://blog.dlprimitives.org/
MIT License
227 stars 16 forks

GPT2 model - Running sums are expected to be present #27

Open arch-user-france1 opened 1 year ago

arch-user-france1 commented 1 year ago

First of all, I'd like to say thank you to everyone who supported this work. Amazing :)

Unfortunately, your library does not work with the GPT2 and GPT2-Large models (and probably any GPT2 variant). Already in the first evaluation, Torch crashes with a RuntimeError: Running sums are expected to be present, which does not occur with the mnist testing script. That runs fine.

I am using Huggingface's gpt2-large model, which worked fine on an AMD processor. But, of course, it was slow...

Here is the code as reference - your system is probably missing termcolor: pip install termcolor.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from termcolor import colored as c

torch.ops.load_library("/lib/libpt_ocl.so")
#torch.utils.rename_privateuse1_backend('ocl')

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
model = GPT2LMHeadModel.from_pretrained('gpt2-large').to(torch.device('privateuseone:0'))

prompt = "Once upon a time"
input_ids = tokenizer.encode(prompt, return_tensors='pt')

temperature = 0.8
generated_text = ""
last_token = ""

print(c(prompt, "blue"), end="")
while True:
    logits = model(input_ids)[0][:, -1, :]
    logits /= temperature
    probabilities = torch.softmax(logits, dim=-1).squeeze()
    next_token_id = torch.multinomial(probabilities, 1)

    # Append the token to the input_ids tensor
    input_ids = torch.cat([input_ids, next_token_id.unsqueeze(-1)], dim=-1)

    # Decode the generated tokens
    #token = tokenizer.decode(input_ids[0])
    token = tokenizer.decode(next_token_id)
    generated_text += token
    print(len(last_token) * "\b" + c(last_token, "magenta"), end="")
    print(c(token, "cyan"), end="", flush=True)
    last_token = token

print()
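The sampling step in the loop above (temperature scaling, softmax, then a multinomial draw) is backend-independent, so it can be checked without the OpenCL backend at all. A pure-Python sketch of the same logic (a hypothetical helper for illustration, not part of the library):

```python
import math
import random

def sample_next(logits, temperature=0.8, rng=random.Random(0)):
    """Temperature sampling: scale logits, softmax, draw one index."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the categorical distribution,
    # as torch.multinomial(probabilities, 1) does.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

Lowering the temperature sharpens the distribution toward the argmax token; raising it flattens the distribution toward uniform sampling.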

It would be amazing if someone could find the problem and fix it. Apparently HIP does not support my AMD GPU properly. I'm using the Radeon RX 7900 XT.

Here is the whole log:

2023-02-27 15:30:23.154167: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-27 15:30:23.615798: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:
2023-02-27 15:30:23.615850: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:
2023-02-27 15:30:23.615857: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Accessing device #0:gfx1100 on AMD Accelerated Parallel Processing
/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/functional.py:2210: UserWarning: The operator 'aten::index_select' is not currently supported on the ocl backend. Please open an issue at for requesting support https://github.com/artyom-beilis/pytorch_dlprim/issues (Triggered internally at /home/france1/ZFS/AI/AICompletion/opencl/pytorch_dlprim/src/tensor_ops.cpp:302.)
  return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
Once upon a timeTraceback (most recent call last):
  File "/home/france1/ZFS/AI/AICompletion/hface/gpttime.py", line 22, in <module>
    logits = model(input_ids)[0][:, -1, :]
  File "/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/france1/.local/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1043, in forward
    transformer_outputs = self.transformer(
  File "/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/france1/.local/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 887, in forward
    outputs = block(
  File "/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/france1/.local/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 387, in forward
    hidden_states = self.ln_1(hidden_states)
  File "/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 190, in forward
    return F.layer_norm(
  File "/home/france1/ZFS/AI/.conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Running sums are expected to be present
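For context, the crash is in torch.layer_norm on the ocl backend. Layer norm computes per-sample statistics on the fly and keeps no running statistics (those belong to batch norm's eval mode), so the "running sums" message suggests the backend is routing layer_norm through a batch-norm-style path. A pure-Python reference of what the op computes over the last dimension (an illustrative sketch, not the library's implementation):

```python
import math

def layer_norm(row, weight, bias, eps=1e-5):
    """Normalize one row over its last dimension, as GPT-2's ln_1/ln_2 do."""
    n = len(row)
    mean = sum(row) / n
    var = sum((x - mean) ** 2 for x in row) / n  # biased variance, as in torch
    return [(x - mean) / math.sqrt(var + eps) * w + b
            for x, w, b in zip(row, weight, bias)]
```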
arch-user-france1 commented 1 year ago

You'd probably like to see this, too:

➜  ~ clinfo                                    
Number of platforms                               1
  Platform Name                                   AMD Accelerated Parallel Processing
  Platform Vendor                                 Advanced Micro Devices, Inc.
  Platform Version                                OpenCL 2.1 AMD-APP (3513.0)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_amd_event_callback 
  Platform Extensions function suffix             AMD
  Platform Host timer resolution                  1ns

  Platform Name                                   AMD Accelerated Parallel Processing
Number of devices                                 1
  Device Name                                     gfx1100
  Device Vendor                                   Advanced Micro Devices, Inc.
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 2.0 
  Driver Version                                  3513.0 (HSA1.1,LC)
  Device OpenCL C Version                         OpenCL C 2.0 
  Device Type                                     GPU
  Device Board Name (AMD)                         Radeon RX 7900 XT
  Device PCI-e ID (AMD)                           0x744c
  Device Topology (AMD)                           PCI-E, 0000:0a:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               42
  SIMD per compute unit (AMD)                     4
  SIMD width (AMD)                                32
  SIMD instruction width (AMD)                    1
  Max clock frequency                             3125MHz
  Graphics IP (AMD)                               11.0
  Device Partition                                (core)
    Max number of sub-devices                     42
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             256
  Preferred work group size (AMD)                 256
  Max work group size (AMD)                       1024
  Preferred work group size multiple (kernel)     32
  Wavefront width (AMD)                           32
  Preferred / native vector sizes                 
    char                                                 4 / 4       
    short                                                2 / 2       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 1 / 1        (cl_khr_fp16)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             No
    Round to nearest                              No
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              21458059264 (19.98GiB)
  Global free memory (AMD)                        20955136 (19.98GiB) 20955136 (19.98GiB)
  Global memory channels (AMD)                    10
  Global memory banks per channel (AMD)           4
  Global memory bank width (AMD)                  256 bytes
  Error Correction support                        No
  Max memory allocation                           18239350368 (16.99GiB)
  Unified memory for Host and Device              No
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   Yes
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Preferred alignment for atomics                 
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    18239350368 (16.99GiB)
  Preferred total size of global vars             21458059264 (19.98GiB)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        32768 (32KiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             29772
    Max size for 1D images from buffer            134217728 pixels
    Max 1D or 2D image array size                 8192 images
    Base address alignment for 2D image buffers   256 bytes
    Pitch alignment for 2D image buffers          256 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             16384x16384x8192 pixels
    Max number of read image args                 128
    Max number of write image args                8
    Max number of read/write image args           64
  Max number of pipe args                         16
  Max active pipe reservations                    16
  Max pipe packet size                            1059481184 (1010MiB)
  Local memory type                               Local
  Local memory size                               65536 (64KiB)
  Local memory size per CU (AMD)                  65536 (64KiB)
  Local memory banks (AMD)                        32
  Max number of constant args                     8
  Max constant buffer size                        18239350368 (16.99GiB)
  Preferred constant buffer size (AMD)            16384 (16KiB)
  Max size of kernel argument                     1024
  Queue properties (on host)                      
    Out-of-order execution                        No
    Profiling                                     Yes
  Queue properties (on device)                    
    Out-of-order execution                        Yes
    Profiling                                     Yes
    Preferred size                                262144 (256KiB)
    Max size                                      8388608 (8MiB)
  Max queues on device                            1
  Max events on device                            1024
  Prefer user sync for interop                    Yes
  Number of P2P devices (AMD)                     0
  Profiling timer resolution                      1ns
  Profiling timer offset since Epoch (AMD)        0ns (Thu Jan  1 01:00:00 1970)
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Thread trace supported (AMD)                  No
    Number of async queues (AMD)                  8
    Max real-time compute queues (AMD)            8
    Max real-time compute units (AMD)             42
  printf() buffer size                            4194304 (4MiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [AMD]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx1100
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx1100
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx1100
artyom-beilis commented 1 year ago

Unfortunately, not all ops are implemented. Far from it.

I'm working on implementing transformer support but haven't finished it yet.

arch-user-france1 commented 1 year ago

Great! I wonder how much work it is to implement and how much time you might need. Do you know?

We could also move this to a discussion... Thank you.

artyom-beilis commented 1 year ago

Great! I wonder how much work it is to implement and how much time you might need. Do you know?

It isn't about how much time it takes. Although the norm layer is tricky, the implementation shouldn't be very hard. You can see some details of what is needed there: https://github.com/artyom-beilis/pytorch_dlprim/discussions/16
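The core of such a norm layer is a small per-row reduction: accumulate a sum and a sum of squares, then normalize. A plain-Python sketch of that single-pass scheme (an illustration of the approach, not the actual OpenCL kernel):

```python
import math

def normalize_row(row, eps=1e-5):
    """Single-pass mean/variance via accumulated sums, then normalize.
    Mirrors what a per-row reduction kernel would accumulate."""
    s = s2 = 0.0
    for x in row:   # one pass: running sum and running sum of squares
        s += x
        s2 += x * x
    n = len(row)
    mean = s / n
    var = s2 / n - mean * mean  # E[x^2] - E[x]^2
    return [(x - mean) / math.sqrt(var + eps) for x in row]
```

On a GPU, each work-group would handle one row, accumulating the two sums in local memory before the normalization pass.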

It is more about my available time (you know, the job that pays, family, and other stuff called life that interferes with development ;-)

arch-user-france1 commented 1 year ago

Of course, I'm sorry... That is why I asked. I do not know how strict your plans are, but even a rough guess would help. I'd be thankful... Please also tell me if you ever feel you don't want to continue the project, since I'll then look for another solution. If only ROCm would work :(

Since I do not have much knowledge of the codebase, I have no idea how much work it is, so, as stated before, a rough guess would help me. Please do not feel stressed just because I asked how long it could theoretically take.

artyom-beilis commented 1 year ago

Please also tell me if you ever feel you don't want to continue the project, since I'll then look for another solution. If only ROCm would work :(

I absolutely want to, and will, continue the project. It is far too valuable.