Closed: ogrisel closed this issue 1 month ago
This is a good idea. My original idea was to use cupy as a backend (https://github.com/data-apis/array-api-strict/issues/5), but that requires you to have access to a CUDA GPU.
Indeed, I saw #5 and was wondering whether it was still considered valid. I am not sure the value of having a multi-backend array-api-strict would outweigh the maintenance complexity.
As you said, the fact that cupy requires a working CUDA setup restricts the pool of people who could run it in their day-to-day developer environment.
Well this idea definitely achieves the original purpose of having a CuPy backend in a much simpler and more general way. I'm not sure if there are any GPU-specific idiosyncrasies that we might want to support which would be difficult to emulate without actually using a library like CuPy.
Motivation: in scikit-learn, we run array API compliance tests with both `array_api_strict` and `array_api_compat.torch`. For the latter, we run the tests with `cuda`, `mps`, and `cpu` devices.

Testing with torch is important because it helps us reveal problems related to device handling. For example, consider the following problematic function:
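A minimal sketch of such a function (the name, the `array_api_compat` helpers, and the concrete operation are illustrative assumptions):

```python
import array_api_compat


def add_intercept_column(X):
    # Dispatch to the array API namespace of the input (torch, numpy, ...).
    xp = array_api_compat.array_namespace(X)
    # Bug: xp.ones allocates on the namespace's default device, which is
    # not necessarily the device that X lives on.
    ones = xp.ones((X.shape[0], 1), dtype=X.dtype)
    return xp.concat([ones, X], axis=1)
```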
Calling this with:
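(a hypothetical reproducer, assuming a CUDA-enabled machine)

```python
import torch

X = torch.asarray([[1.0, 2.0], [3.0, 4.0]], device="cuda")
add_intercept_column(X)
```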
raises a `RuntimeError` because PyTorch does not implicitly move data across devices. Hence, the bug should be fixed by making the function allocate the new array on the input's device, as in the sketch below.

However, not all scikit-learn contributors have access to a machine with a non-cpu device (e.g. "mps" or "cuda"), so they have no easy way to detect this family of bugs by running the tests on their local laptop: they only discover such issues on the CI and need to use tools such as Google Colab to debug and refine their code instead of their regular dev environment.
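A device-aware fix might look like the following (again an illustrative sketch; `array_api_compat.device` is the compat library's portable device accessor):

```python
import array_api_compat


def add_intercept_column(X):
    xp = array_api_compat.array_namespace(X)
    # Fix: explicitly allocate the new array on the same device as X.
    device = array_api_compat.device(X)
    ones = xp.ones((X.shape[0], 1), dtype=X.dtype, device=device)
    return xp.concat([ones, X], axis=1)
```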
To reduce this friction, it would be nice if `array-api-strict` could accept creating arrays with `a = xp.asarray([1, 2, 3], device="virtual_device_a")` and `b = xp.asarray([1, 2, 3], device="virtual_device_b")`, and would raise a `RuntimeError` on operations that combine arrays from different devices, as PyTorch does.
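Put together, the hoped-for behavior would look something like the sketch below (the virtual device names and the exact exception type are part of this proposal, not an existing array-api-strict API):

```python
import array_api_strict as xp

a = xp.asarray([1, 2, 3], device="virtual_device_a")
b = xp.asarray([1, 2, 3], device="virtual_device_b")

# Combining arrays that live on different virtual devices should fail
# loudly, mirroring PyTorch, so that device-handling bugs surface on any
# contributor's laptop without real GPU hardware.
a + b  # expected: RuntimeError
```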