artyom-beilis / pytorch_dlprim

DLPrimitives/OpenCL out of tree backend for pytorch
http://blog.dlprimitives.org/
MIT License

SEGFAULT when loaded from C++ frontend #49

Open tada123 opened 7 months ago

tada123 commented 7 months ago

I loaded the library using dlopen("libpt_ocl.so", RTLD_GLOBAL), but the following code segfaults:

    torch::register_privateuse1_backend("ocl");
    torch::Device device("ocl:0");
    float a[2] = {0.2f, 0.3f};
    torch::Tensor tensor = torch::from_blob(a, {2});
    std::cout << "tensor: " << tensor << std::endl;
    torch::Tensor octensor = tensor.to(device); //THIS LINE CAUSES SEGFAULT

Equivalent code in Python works:

import os
import torch
torch.ops.load_library("libpt_ocl.so")
t = torch.tensor([0.5, 0.3])
t2 = torch.tensor([0.2, 0.4])
torch.utils.rename_privateuse1_backend('ocl')
print("Creating ocl")
dev = torch.device('ocl:0')
print("Moving to ocl")
o = t.to(dev)

artyom-beilis commented 7 months ago

What PyTorch version did you build against? And what is this device 0? Please give the output of clinfo -l or clinfo.

artyom-beilis commented 7 months ago

Ahhh I see it is C++.

Can you give a full C++ code sample and a makefile or something? I honestly haven't tested it against C++ yet.

tada123 commented 7 months ago

Hello, thanks for your quick response. Here is the output of the clinfo command:

>>> clinfo
Number of platforms                               2
  Platform Name                                   Clover
  Platform Vendor                                 Mesa
  Platform Version                                OpenCL 1.1 Mesa 23.2.1-arch1.2
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd
  Platform Extensions function suffix             MESA

  Platform Name                                   AMD Accelerated Parallel Processing
  Platform Vendor                                 Advanced Micro Devices, Inc.
  Platform Version                                OpenCL 2.1 AMD-APP (3380.4)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_amd_event_callback cl_amd_offline_devices 
  Platform Extensions function suffix             AMD
  Platform Host timer resolution                  1ns

  Platform Name                                   Clover
Number of devices                                 1
  Device Name                                     AMD Radeon RX 560 Series (polaris11, LLVM 16.0.6, DRM 3.49, 6.1.67-1-lts)
  Device Vendor                                   AMD
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 1.1 Mesa 23.2.1-arch1.2
  Device Numeric Version                          0x401000 (1.1.0)
  Driver Version                                  23.2.1-arch1.2
  Device OpenCL C Version                         OpenCL C 1.1 
  Device Type                                     GPU
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Max compute units                               14
  Max clock frequency                             1176MHz
  Max work item dimensions                        3
  Max work item sizes                             256x256x256
  Max work group size                             256
  Preferred work group size multiple (kernel)     64
  Preferred / native vector sizes                 
    char                                                16 / 16      
    short                                                8 / 8       
    int                                                  4 / 4       
    long                                                 2 / 2       
    half                                                 0 / 0        (n/a)
    float                                                4 / 4       
    double                                               2 / 2        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  No
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              4294967296 (4GiB)
  Error Correction support                        No
  Max memory allocation                           1073741824 (1024MiB)
  Unified memory for Host and Device              No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       32768 bits (4096 bytes)
  Global Memory cache type                        None
  Image support                                   No
  Local memory type                               Local
  Local memory size                               65536 (64KiB)
  Max number of constant args                     16
  Max constant buffer size                        67108864 (64MiB)
  Max size of kernel argument                     1024
  Queue properties                                
    Out-of-order execution                        No
    Profiling                                     Yes
  Profiling timer resolution                      0ns
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    ILs with version                              SPIR-V                                                           0x400000 (1.0.0)
  Built-in kernels with version                   (n/a)
  Device Extensions                               cl_khr_byte_addressable_store cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_fp64 cl_khr_extended_versioning
  Device Extensions with Version                  cl_khr_byte_addressable_store                                    0x400000 (1.0.0)
                                                  cl_khr_global_int32_base_atomics                                 0x400000 (1.0.0)
                                                  cl_khr_global_int32_extended_atomics                             0x400000 (1.0.0)
                                                  cl_khr_local_int32_base_atomics                                  0x400000 (1.0.0)
                                                  cl_khr_local_int32_extended_atomics                              0x400000 (1.0.0)
                                                  cl_khr_int64_base_atomics                                        0x400000 (1.0.0)
                                                  cl_khr_int64_extended_atomics                                    0x400000 (1.0.0)
                                                  cl_khr_fp64                                                      0x400000 (1.0.0)
                                                  cl_khr_extended_versioning                                       0x400000 (1.0.0)

  Platform Name                                   AMD Accelerated Parallel Processing
Number of devices                                 1
  Device Name                                     Baffin
  Device Vendor                                   Advanced Micro Devices, Inc.
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 2.0 AMD-APP (3380.4)
  Driver Version                                  3380.4 (PAL,HSAIL)
  Device OpenCL C Version                         OpenCL C 2.0 
  Device Type                                     GPU
  Device Board Name (AMD)                         AMD Radeon RX 560 Series
  Device PCI-e ID (AMD)                           0x67ef
  Device Topology (AMD)                           PCI-E, 0000:01:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               14
  SIMD per compute unit (AMD)                     4
  SIMD width (AMD)                                16
  SIMD instruction width (AMD)                    1
  Max clock frequency                             1176MHz
  Graphics IP (AMD)                               8.0
  Device Partition                                (core)
    Max number of sub-devices                     14
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             256
  Preferred work group size (AMD)                 256
  Max work group size (AMD)                       1024
  Preferred work group size multiple (kernel)     64
  Wavefront width (AMD)                           64
  Preferred / native vector sizes                 
    char                                                 4 / 4       
    short                                                2 / 2       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 1 / 1        (cl_khr_fp16)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             No
    Round to nearest                              No
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     No
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              4294967296 (4GiB)
  Global free memory (AMD)                        4128768 (3.938GiB) 3866624 (3.688GiB)
  Global memory channels (AMD)                    4
  Global memory banks per channel (AMD)           4
  Global memory bank width (AMD)                  256 bytes
  Error Correction support                        No
  Max memory allocation                           3422552064 (3.188GiB)
  Unified memory for Host and Device              No
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   Yes
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       2048 bits (256 bytes)
  Preferred alignment for atomics                 
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    3080296704 (2.869GiB)
  Preferred total size of global vars             4294967296 (4GiB)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        16384 (16KiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             16
    Max size for 1D images from buffer            213909504 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   256 bytes
    Pitch alignment for 2D image buffers          256 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                64
    Max number of read/write image args           64
  Max number of pipe args                         16
  Max active pipe reservations                    16
  Max pipe packet size                            3422552064 (3.188GiB)
  Local memory type                               Local
  Local memory size                               65536 (64KiB)
  Local memory size per CU (AMD)                  65536 (64KiB)
  Local memory banks (AMD)                        32
  Max number of constant args                     8
  Max constant buffer size                        3422552064 (3.188GiB)
  Preferred constant buffer size (AMD)            16384 (16KiB)
  Max size of kernel argument                     1024
  Queue properties (on host)                      
    Out-of-order execution                        No
    Profiling                                     Yes
  Queue properties (on device)                    
    Out-of-order execution                        Yes
    Profiling                                     Yes
    Preferred size                                262144 (256KiB)
    Max size                                      8388608 (8MiB)
  Max queues on device                            1
  Max events on device                            1024
  Prefer user sync for interop                    Yes
  Number of P2P devices (AMD)                     0
  Profiling timer resolution                      1ns
  Profiling timer offset since Epoch (AMD)        1702382299057065703ns (Tue Dec 12 12:58:19 2023)
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Thread trace supported (AMD)                  Yes
    Number of async queues (AMD)                  4
    Max real-time compute queues (AMD)            1
    Max real-time compute units (AMD)             0
  printf() buffer size                            4194304 (4MiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_khr_gl_depth_images cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_amd_copy_buffer_p2p 

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [MESA]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 Clover
    Device Name                                   AMD Radeon RX 560 Series (polaris11, LLVM 16.0.6, DRM 3.49, 6.1.67-1-lts)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 Clover
    Device Name                                   AMD Radeon RX 560 Series (polaris11, LLVM 16.0.6, DRM 3.49, 6.1.67-1-lts)
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 Clover
    Device Name                                   AMD Radeon RX 560 Series (polaris11, LLVM 16.0.6, DRM 3.49, 6.1.67-1-lts)

The code is very simple; it just tries to copy a torch::Tensor to the ocl device:

#include <torch/all.h>

#include <vector>
#include <stdio.h>
#include <dlfcn.h>
extern "C"{
    #ifdef BUILD_AS_LIB
    int run(){
    #else
    int main(int argc, char** argv){
    #endif
        void* lib = dlopen("libpt_ocl.so", RTLD_NOW);
        torch::register_privateuse1_backend("ocl");
        torch::Device dev("ocl:0");
        torch::Tensor t = torch::ones(2);
        torch::Tensor ot = t.to(dev);
        return 0;
    }
}

Compiled with: /usr/bin/g++ -o TorchTest -l torch -l c10 -l protobuf-lite -l protobuf -l protoc -l torch_cpu -Wl,--no-as-needed main.cpp

Also, I discovered a strange thing: when the C++ code is compiled as a library (using -shared -fPIC) and executed from the Python interpreter using only import ctypes; ctypes.CDLL("libTorchTest.so").run(), everything works OK (even though in the case without the Python interpreter, gdb shows that the problem comes from the libamdocl-orca64.so AMD blob).

So maybe the problem is some uninitialized libpython global variable.

artyom-beilis commented 7 months ago
 void* lib = dlopen("libpt_ocl.so", RTLD_NOW);

You don't check the result... if the library isn't loaded, it indeed aborts (not a segfault).
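
For reference, a minimal sketch of such a check (plain dlopen/dlerror, nothing specific to pytorch_dlprim):

#include <dlfcn.h>
#include <cstdio>

int main() {
    // dlerror() returns a human-readable description of the most recent failure
    void *lib = dlopen("libpt_ocl.so", RTLD_NOW);
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    return 0;
}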

If that isn't the issue, can you try the second device, i.e. ocl:1? The 2nd driver generally works better and supports OpenCL 2.0, so it is preferred; it also gave me better performance on the 560.
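
To double-check which index maps to which driver, a small enumeration program can help. This is a sketch using the plain OpenCL C API; the assumption (not confirmed in this thread) is that ocl:N follows the same platform/device enumeration order as clinfo:

#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);
    std::vector<cl_platform_id> plats(nplat);
    clGetPlatformIDs(nplat, plats.data(), nullptr);
    int index = 0;
    for (cl_platform_id p : plats) {
        char pname[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);
        cl_uint ndev = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &ndev);
        std::vector<cl_device_id> devs(ndev);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, ndev, devs.data(), nullptr);
        for (cl_device_id d : devs) {
            char dname[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            // With the clinfo output above, index 0 would be Clover and
            // index 1 the AMD APP platform -- assuming enumeration order.
            printf("ocl:%d -> %s / %s\n", index++, pname, dname);
        }
    }
    return 0;
}

Build with g++ and -lOpenCL.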

tada123 commented 7 months ago

The dlopen returned non-NULL (checked using gdb) and unfortunately ocl:1 also segfaults. However, it seems that it's not a problem of libpython.so globals. When I dlopen libTorchTest.so (the compiled main.cpp, previously loaded via Python) from another C++ file:

#include <stdio.h>
#include <dlfcn.h>
#include <cstring>
#include <cstdlib>

typedef int(*RunFunc)();

int main(int argc, char** argv){
    if((argc < 2) || (strcmp(argv[1], "--help") == 0)){
        printf("frontend [libraryPath]");
        exit(1);
    }
    printf("Loading library from %s\n", argv[1]);
    void* lib = dlopen(argv[1], RTLD_NOW); ///RTLD_GLOBAL also leads to SEGFAULT!!!!
    if(!lib){
        fprintf(stderr, "ERROR: Cannot load library from \"%s\"\n", argv[1]);
        exit(5);
    }
    void* func = dlsym(lib, "run");
    if(!func){
        puts("ERROR: \"run\" function not present in library");
        exit(5);
    }
    printf("Function returned: %d", ((RunFunc) func)());
}

The problem also disappears. But once libTorchTest.so is linked into the new file (g++ -l TorchTest another.cpp), the same error occurs even with dlopen; and when it is not linked in, loading with the RTLD_GLOBAL flag also leads to a segfault. (So it only works if the library is dlopened by another executable and its symbols are not made public.)
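
A minimal sketch of the two loading modes described above (it reuses the libTorchTest.so library and the run symbol from the earlier comment; "reportedly" marks the outcomes observed in this thread):

#include <dlfcn.h>
#include <cstdio>

typedef int (*RunFunc)();

int main() {
    // RTLD_LOCAL (the default on Linux) keeps the library's symbols private
    // to it; this is the combination that reportedly works.
    void *lib = dlopen("libTorchTest.so", RTLD_NOW | RTLD_LOCAL);
    // By contrast, RTLD_GLOBAL makes the symbols visible to every object
    // loaded afterwards; that combination (like linking with -l TorchTest
    // directly) reportedly leads to the same segfault.
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    RunFunc run = (RunFunc)dlsym(lib, "run");
    return run ? run() : 1;
}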

Another strange thing is that when running the Python interpreter from C++, the app also receives SIGSEGV inside PyRun_SimpleString when it executes the ot = t.to(dev) command (gcc command same as for main.cpp):

#include <stdio.h>
#include <dlfcn.h>
#include <cstring>
#include <cstdlib>
#include <python3.11/Python.h>

int main(){
        Py_Initialize();
        // Same one-liner as before, split into adjacent string literals for
        // readability and with the inner quotes properly escaped.
        PyRun_SimpleString(
            "import torch; "
            "torch.ops.load_library(\"/run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Temp/pytorch_dlprim/build/libpt_ocl.so\"); "
            "torch.utils.rename_privateuse1_backend('ocl'); "
            "print(\"dev\"); dev = torch.device('ocl:0'); "
            "print(\"tens\"); t = torch.tensor([0.2, 0.5]); "
            "print(\"Moving to ocl\"); ot = t.to(dev); "
            "print(\"otprint\"); print(ot)");

        Py_FinalizeEx();
        return 0;
}
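
One small note on the embedding code: PyRun_SimpleString returns 0 on success and -1 when the snippet raises a Python exception, so checking the result (a minimal sketch below) distinguishes a Python-level failure from the hard crash described here:

#include <python3.11/Python.h>
#include <cstdio>

int main() {
    Py_Initialize();
    // -1 means an exception was raised (its traceback is printed to stderr);
    // a SIGSEGV, by contrast, never returns control here at all.
    if (PyRun_SimpleString("import torch") != 0)
        fprintf(stderr, "python snippet failed\n");
    Py_FinalizeEx();
    return 0;
}
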
tada123 commented 7 months ago

UPDATE: According to the stack trace, it seems the error comes from the OpenCL library, but I still don't know why it only works when loaded by dlopen. Here is the program's stack trace, which may be helpful for finding the issue.

Thread 1 "frontend" received signal SIGSEGV, Segmentation fault.
0x00007ffff7f741b0 in amdgpu_cs_ctx_free () from /usr/lib/libdrm_amdgpu.so.1
(gdb) bt
#0  0x00007ffff7f741b0 in amdgpu_cs_ctx_free () at /usr/lib/libdrm_amdgpu.so.1
#1  0x00007ffddf138024 in  () at /usr/lib/libamdocl-orca64.so
#2  0x00007ffddf129e09 in  () at /usr/lib/libamdocl-orca64.so
#3  0x00007ffddf12a0ad in  () at /usr/lib/libamdocl-orca64.so
#4  0x00007ffddf12a0f6 in  () at /usr/lib/libamdocl-orca64.so
#5  0x00007ffddf2b2084 in  () at /usr/lib/libamdocl-orca64.so
#6  0x00007ffddf298364 in  () at /usr/lib/libamdocl-orca64.so
#7  0x00007ffddf312cb1 in  () at /usr/lib/libamdocl-orca64.so
#8  0x00007ffddf313416 in  () at /usr/lib/libamdocl-orca64.so
#9  0x00007ffddf061346 in  () at /usr/lib/libamdocl-orca64.so
#10 0x00007ffddf02cc85 in  () at /usr/lib/libamdocl-orca64.so
#11 0x00007ffde1c02209 in  () at /usr/lib/libamdocl-orca64.so
#12 0x00007ffddf02cdbc in clIcdGetPlatformIDsKHR () at /usr/lib/libamdocl-orca64.so
#13 0x00007ffff7f07565 in  () at /opt/rocm/lib/libOpenCL.so.1
#14 0x00007ffff7f09607 in  () at /opt/rocm/lib/libOpenCL.so.1
#15 0x00007ffff79c2bbf in  () at /usr/lib/libc.so.6
#16 0x00007ffff7f07bb6 in clGetPlatformIDs () at /opt/rocm/lib/libOpenCL.so.1
#17 0x00007ffe0114e933 in cl::Platform::get(std::vector<cl::Platform, std::allocator<cl::Platform> >*) (platforms=0x7fffffffc3d0) at /opt/rocm/include/CL/cl2.hpp:2474
#18 0x00007ffe011507e5 in ptdlprim::CLContextManager::allocate() (this=0x5555595557b0) at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/CLTensor.h:160
#19 0x00007ffe011506a5 in ptdlprim::CLContextManager::init(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&) (self=std::unique_ptr<ptdlprim::CLContextManager> = {...})
    at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/CLTensor.h:151
#20 0x00007ffe0116bc2d in std::__invoke_impl<void, void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&>(std::__invoke_other, void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)
    (__f=@0x7ffe01150655: {void (std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> > &)} 0x7ffe01150655 <ptdlprim::CLContextManager::init(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)>) at /usr/include/c++/13.2.1/bits/invoke.h:61

#21 0x00007ffe01164285 in std::__invoke<void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&>(void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)
    (__fn=@0x7ffe01150655: {void (std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> > &)} 0x7ffe01150655 <ptdlprim::CLContextManager::init(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)>) at /usr/include/c++/13.2.1/bits/invoke.h:96
#22 0x00007ffe01158854 in std::call_once<void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&>(std::once_flag&, void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)::{lambda()#1}::operator()() const
    (__closure=0x7fffffffc660) at /usr/include/c++/13.2.1/mutex:900
#23 0x00007ffe011642b3 in std::once_flag::_Prepare_execution::_Prepare_execution<std::call_once<void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&>(std::once_flag&, void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)::{lambda()#1}>(void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&))::{lambda()#1}::operator()() const (__closure=0x0) at /usr/include/c++/13.2.1/mutex:836
#24 0x00007ffe011642c4 in std::once_flag::_Prepare_execution::_Prepare_execution<std::call_once<void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&>(std::once_flag&, void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)::{lambda()#1}>(void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&))::{lambda()#1}::_FUN() () at /usr/include/c++/13.2.1/mutex:836
#25 0x00007ffff79c2bbf in  () at /usr/lib/libc.so.6
#26 0x00007ffe011459e5 in __gthread_once(__gthread_once_t*, void (*)()) (__once=0x7ffe0126d0d8 <ptdlprim::CLContextManager::instance()::once>, __func=0x7ffff7ce0230 <std::__once_proxy()>) at /usr/include/c++/13.2.1/x86_64-pc-linux-gnu/bits/gthr-default.h:700
#27 0x00007ffe011588b8 in std::call_once<void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&>(std::once_flag&, void (&)(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&), std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)
    (__once=..., __f=@0x7ffe01150655: {void (std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> > &)} 0x7ffe01150655 <ptdlprim::CLContextManager::init(std::unique_ptr<ptdlprim::CLContextManager, std::default_delete<ptdlprim::CLContextManager> >&)>) at /usr/include/c++/13.2.1/mutex:907
#28 0x00007ffe0115048d in ptdlprim::CLContextManager::instance() () at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/CLTensor.h:75
#29 0x00007ffe011d8069 in ptdlprim::CLContextManager::alloc(int, long) (id=1, size=8) at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/CLTensor.h:100
#30 0x00007ffe011d819b in ptdlprim::CLContextManager::allocate(c10::Device const&, unsigned long) (dev=..., n=8) at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/CLTensor.h:115
#31 0x00007ffe011d6676 in ptdlprim::new_ocl_tensor(c10::ArrayRef<long>, c10::Device, c10::ScalarType) (size=..., dev=..., type=c10::ScalarType::Float) at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/utils.cpp:67
#32 0x00007ffe01189ec6 in ptdlprim::allocate_empty(c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, c10::optional<c10::MemoryFormat>) (size=..., dtype=..., device=...)
    at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/tensor_ops.cpp:27
#33 0x00007ffe01189fb2 in ptdlprim::empty_strided(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>)
    (size=..., dtype=..., layout=..., device=..., pin_memory=...) at /run/media/tada/1976709e-d15f-4ba0-8cb3-5f34ce866960/Libs/pytorch_dlprim/src/tensor_ops.cpp:34
--Type <RET> for more, q to quit, c to continue without paging--
#34 0x00007ffe0119b8c6 in c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>), at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool> > >::operator()(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) (this=0x555559548c70, args#0=..., args#1=..., args#2=..., args#3=..., args#4=..., args#5=...) at /usr/include/ATen/core/boxing/impl/WrapFunctionIntoRuntimeFunctor.h:18
#35 0x00007ffe0119ca4d in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoRuntimeFunctor_<at::Tensor (*)(c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>), at::Tensor, c10::guts::typelist::typelist<c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool> > >, at::Tensor (c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) (functor=0x555559548c70, args#0=..., args#1=..., args#2=..., args#3=..., args#4=..., args#5=...) at /usr/include/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:464
#36 0x00007fffed052c81 in at::_ops::empty_strided::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) ()
    at /usr/lib/libtorch_cpu.so
#37 0x00007fffed3f8795 in  () at /usr/lib/libtorch_cpu.so
#38 0x00007fffed0a2d9d in at::_ops::empty_strided::call(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>) () at /usr/lib/libtorch_cpu.so
#39 0x00007fffec49f487 in  () at /usr/lib/libtorch_cpu.so
#40 0x00007fffec7e7b13 in at::native::_to_copy(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) () at /usr/lib/libtorch_cpu.so
#41 0x00007fffed5c4d7a in  () at /usr/lib/libtorch_cpu.so
#42 0x00007fffeccc48fe in at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) ()
    at /usr/lib/libtorch_cpu.so
#43 0x00007fffed3f502b in  () at /usr/lib/libtorch_cpu.so
#44 0x00007fffeccc48fe in at::_ops::_to_copy::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) ()
    at /usr/lib/libtorch_cpu.so
#45 0x00007fffef86f733 in  () at /usr/lib/libtorch_cpu.so
#46 0x00007fffef86fc63 in  () at /usr/lib/libtorch_cpu.so
#47 0x00007fffecd78046 in at::_ops::_to_copy::call(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, c10::optional<c10::MemoryFormat>) () at /usr/lib/libtorch_cpu.so
#48 0x00007fffec7ddba8 in at::native::to(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, bool, c10::optional<c10::MemoryFormat>) () at /usr/lib/libtorch_cpu.so
#49 0x00007fffed78ae90 in  () at /usr/lib/libtorch_cpu.so
#50 0x00007fffecf1b67d in at::_ops::to_dtype_layout::call(at::Tensor const&, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>, bool, bool, c10::optional<c10::MemoryFormat>) ()
    at /usr/lib/libtorch_cpu.so
#51 0x00007ffff7fa5e08 in at::Tensor::to(c10::TensorOptions, bool, bool, c10::optional<c10::MemoryFormat>) const (this=0x7fffffffd820, options=..., non_blocking=false, copy=false, memory_format=...) at /usr/include/ATen/core/TensorBody.h:4213
#52 0x00007ffff7fa343f in run() () at /mnt/hdd_home/Projects/IsolatedTorchTroubleshoot/src/main.cpp:34
#53 0x00005555555552b0 in main(int, char**) (argc=1, argv=0x7fffffffda08) at /mnt/hdd_home/Projects/IsolatedTorchTroubleshoot/src/frontend.cpp:31

In the working case, amdgpu_cs_ctx_free() is also executed but does not produce the error (it also looks like it receives the same arguments, but I could only check arguments up to stack frame #17). I also tried disabling USE_PYDLPRIM in the library's CMakeLists.txt, but no luck 🙁
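
Since frame #16 above is plain clGetPlatformIDs, one way to narrow this down further (a suggestion, not something tried in this thread; clprobe.cpp is a hypothetical name) is a tiny probe that only enumerates platforms, built once standalone and once with libtorch_cpu forcibly linked in, to see whether the crash needs pytorch_dlprim at all:

#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint n = 0;
    // This is the same entry point as frame #16 of the backtrace above.
    cl_int err = clGetPlatformIDs(0, nullptr, &n);
    printf("clGetPlatformIDs: err=%d, platforms=%u\n", (int)err, (unsigned)n);
    return 0;
}

// Build standalone:        g++ -o clprobe clprobe.cpp -lOpenCL
// With torch linked in:    g++ -o clprobe clprobe.cpp -lOpenCL -Wl,--no-as-needed -ltorch_cpu
// If only the second build crashes, the mere presence of libtorch_cpu.so in
// the process would be implicated, rather than anything this backend does.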