sbalint98 closed this issue 3 years ago.
@sbalint98 Again, thanks for creating the issue. I will test this on my side and get back to you.
@sbalint98 Here are my observations:
These observations tell me that the issue is not CUDA-backend specific.
Thank you for investigating. I find these results quite surprising; I thought ctest was just a wrapper around gtest in this case. However, I can confirm that running the tests through ctest works fine. Closing now.
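For reference, a minimal sketch of the two ways of invoking the suite (the gtest binary name is illustrative, not taken from this thread):

```sh
# From the oneMKL build directory: driving the suite through ctest works fine.
cd build
ctest

# Invoking a gtest binary directly (illustrative name) reproduces the OOM,
# presumably because all tests then run in a single process.
./bin/test_main_blas_ct
```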
Summary
When running the unit tests on a CUDA device, the tests fail because the GPU runs out of memory.
I am trying to run the tests on a GTX 1080 Ti with 11178 MiB of global memory, but after executing the first few tests, a runtime exception is thrown because of insufficient device memory (CUDA_ERROR_OUT_OF_MEMORY); see the log below.
Version
The current oneMKL develop head is used, i.e. commit 1ed12c7.
Environment
The CUDA-enabled DPC++ compiler is built with buildbot/configure.py --cuda and buildbot/compile.py.
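For context, a typical build of the CUDA-enabled DPC++ toolchain with those scripts looks roughly like this; the checkout location and install path are assumptions, not taken from this report:

```sh
# Sketch only: clone and build intel/llvm (DPC++) with the CUDA plugin enabled.
git clone https://github.com/intel/llvm.git -b sycl
cd llvm
python ./buildbot/configure.py --cuda
python ./buildbot/compile.py
# The resulting toolchain lands under llvm/build/install,
# which would correspond to <cuda-DPC++-dir> below.
```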
Steps to reproduce
Let the CUDA-enabled DPC++ be installed in <cuda-DPC++-dir>.
Configure and build oneMKL (a rough configuration sketch follows):
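The sketch below assumes the cuBLAS-backend flags from the oneMKL README; the reference-BLAS prefix is a placeholder, not taken from this report:

```sh
# Sketch only: build oneMKL with the cuBLAS backend using the DPC++ toolchain above.
export PATH=<cuda-DPC++-dir>/bin:$PATH
export LD_LIBRARY_PATH=<cuda-DPC++-dir>/lib:$LD_LIBRARY_PATH

mkdir build && cd build
cmake .. \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DENABLE_MKLCPU_BACKEND=False \
  -DENABLE_MKLGPU_BACKEND=False \
  -DENABLE_CUBLAS_BACKEND=True \
  -DREF_BLAS_ROOT=<reference_blas_install_prefix>   # reference BLAS used by the functional tests
cmake --build . -j
```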
Observed behavior
After the first few tests, all GPU tests fail because of CUDA_ERROR_OUT_OF_MEMORY. Checking nvidia-smi while running the tests confirms that the allocated memory is continuously increasing over time. Possible memory leak? Full log: cuda_test_out.log
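A simple way to watch device memory while the tests run (plain nvidia-smi polling, not from the original log):

```sh
# Print used/total device memory once per second in a second terminal
# while the test suite runs.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
```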
Expected behavior
GPU tests shouldn't fail because of a lack of device memory.