Hi,
@posch and I noted for the CUDA version of the benchmark that large array sizes cause an integer overflow in the check of whether the required amount of memory exceeds the GPU's global memory capacity:
https://github.com/UoB-HPC/BabelStream/blob/e21134d53814147595aa2d96fcb94800b77a35dc/src/cuda/CUDAStream.cu#L53-L55
On an x86_64 Linux system (GCC 8.3), a "large enough" array size, e.g. 1024^3, causes an overflow when multiplied by three. The resulting (negative) value is then sign-extended/promoted to the unsigned `size_t` and compared with the memory capacity. The comparison shown above therefore returns true even if the device in use has 40 GB or more of memory and could provide the space for all three arrays.

This may affect not only the CUDA version of the benchmark.