AccelerateHS / accelerate

Embedded language for high-performance array computations
https://www.acceleratehs.org

[BUG] Internal error in package accelerate and LLVM.PTX backend: CUDA Exception - misaligned address #529

Open · sergiodguezc opened this issue 1 year ago

sergiodguezc commented 1 year ago

**Description**
I encountered an internal error in the accelerate package while running my code. The error message was:

```
Internal error in package accelerate
Please submit a bug report at https://github.com/AccelerateHS/accelerate/issues

CUDA Exception: misaligned address

CallStack (from HasCallStack):
  internalError: Data.Array.Accelerate.LLVM.PTX.State:53:9
```

**Steps to reproduce**
I don't currently have an example that reproduces the error, but I am working on one and will attach it to this issue as soon as I have it.

**Expected behaviour**
I expected the code to run without any errors.

**Environment**

```
Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 760 (192-bit)"
  CUDA Driver Version / Runtime Version          11.4 / 10.2
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 1482 MBytes (1554251776 bytes)
  ( 6) Multiprocessors, (192) CUDA Cores/MP:     1152 CUDA Cores
  GPU Max Clock rate:                            889 MHz (0.89 GHz)
  Memory Clock rate:                             2800 Mhz
  Memory Bus Width:                              192-bit
  L2 Cache Size:                                 393216 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS
```


 - GHC: `8.10.7`
 - OS: `Arch Linux 6.1.22-1`

**Additional context**
Note that when using the LLVM.Native backend, the same code runs without any errors; the exception only occurs with the LLVM.PTX backend. Please let me know if you need any additional information or if there's anything else I can do to help diagnose and fix this issue. Thank you.
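While the actual failing program is not yet available, a minimal harness along these lines could be used to narrow down the problem: run the same Accelerate computation through both backends and observe which one faults. The `dotp` body below is a placeholder, not the program that triggered the error.

```haskell
-- Hypothetical reproduction harness: evaluate one Accelerate computation
-- on both the Native (CPU) and PTX (GPU) backends. In the failing case,
-- only the GPU.run call would raise the misaligned-address exception.
import qualified Data.Array.Accelerate             as A
import qualified Data.Array.Accelerate.LLVM.Native as CPU
import qualified Data.Array.Accelerate.LLVM.PTX    as GPU

-- Placeholder computation; substitute the real failing program here.
dotp :: A.Acc (A.Vector Float) -> A.Acc (A.Vector Float) -> A.Acc (A.Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs   = A.fromList (A.Z A.:. 1000000) [0 ..]    :: A.Vector Float
      ys   = A.fromList (A.Z A.:. 1000000) [1, 3 ..] :: A.Vector Float
      prog = dotp (A.use xs) (A.use ys)
  putStrLn ("Native: " ++ show (CPU.run prog))  -- works per the report
  putStrLn ("PTX:    " ++ show (GPU.run prog))  -- backend under suspicion
```

Running both backends in one process also rules out environment differences (driver, toolkit) between separate runs.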
ivogabe commented 1 year ago

We've recently also seen this in a larger project, but we couldn't create a small reproduction from that (yet). It would be really useful if you could create a small reproduction! That would make it a lot easier to debug.