rust-ndarray / ndarray

ndarray: an N-dimensional array with array views, multidimensional slicing, and efficient operations
https://docs.rs/ndarray/
Apache License 2.0

GPU Support via OpenCL #1377

Pencilcaseman opened 3 months ago

Pencilcaseman commented 3 months ago

This is very much a work-in-progress, but I wanted to know how this approach fits with the rest of the codebase. I realise it's not the most "rusty" implementation, but I think it could be abstracted away quite nicely.

There is a fair amount of unnecessary/unclean code, but that can be cleaned up pretty quickly. The changes made here only allow for binary operations (+ - * /) on contiguous ArrayBase references (see the example below). With more work, it could be expanded to support almost everything the CPU "backend" supports.

To enable OpenCL support, you need to enable the opencl feature.
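
For anyone trying this out, enabling it from a dependent crate might look something like the snippet below. The git URL and branch name are placeholders; only the `opencl` feature name comes from this PR.

[dependencies]
# Placeholder source; point this at whatever branch this PR ends up on.
ndarray = { git = "https://github.com/rust-ndarray/ndarray", branch = "opencl-gpu-support", features = ["opencl"] }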

Here is an example:

// Unfortunately, OpenCL requires some initialisation before it can be used.
// There are currently no checks on this, but they can be easily added.
ndarray::configure();

// Note that the result of `move_to_device` is a `Result<_, OpenCLErrorCode>`, so errors
// can be handled correctly
let x = ndarray::Array2::<f32>::from_shape_fn((3, 4), |(r, c)| (c + r * 4) as f32)
    .move_to_device(ndarray::Device::OpenCL)
    .unwrap_or_else(|_| panic!("Something went wrong"));

let y = ndarray::Array2::<f32>::from_shape_fn((3, 4), |(r, c)| 12.0 - (c + r * 4) as f32)
    .move_to_device(ndarray::Device::OpenCL)
    .unwrap_or_else(|_| panic!("Something went wrong"));

// Only this form of binary operation is supported currently.
// i.e. reference <op> reference
// This operation takes place in a JIT-compiled kernel on the GPU
let z = &x + &y;

// You can only print something if it's in Host memory. This could be changed
// to automatically copy/move to host if necessary
println!(
    "Result:\n{:>2}",
    z.move_to_device(ndarray::Device::Host).unwrap()
);

// [[12, 12, 12, 12],
//  [12, 12, 12, 12],
//  [12, 12, 12, 12]]

A very similar approach can be taken to get CUDA support working. It might even be possible to merge them into a single GPU backend trait, for example, which would simplify the implementations quite a bit. It'd require a few substantial changes internally, though (I think. Maybe not?).

Anyway, let me know what you think!

bluss commented 3 months ago

That is interesting. Thanks for the neat proof of concept.

Right now I don't have the bandwidth to look at this in depth, unfortunately. If you look at the repo, we're working on version 0.16, which is only three years late or so. :) This reply is brief and off the cuff for that reason. I'll ask some questions; that doesn't mean I'm making requirements or demands, they're just things to think about.

Why not have a separate owned type for arrays that are on a non-CPU device? Users probably don't want a situation where they can't tell whether adding two arrays together is going to cause a crash or not.
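
Just to illustrate what I mean, a very rough sketch of that idea might look like the following. Every name here is hypothetical; nothing like this exists in ndarray today, and a real backend would hold a device buffer handle rather than host memory.

use ndarray::{Array, Dimension};

// Hypothetical stand-in for device-side storage (a real backend would wrap an
// OpenCL buffer handle instead of a Vec).
struct DeviceBuffer<A>(Vec<A>);

#[derive(Debug)]
struct DeviceError;

// Hypothetical separate owned type for device-resident arrays. Because it is a
// different type from `Array`, transfers are always explicit and host-only
// operations simply don't exist on it.
struct DeviceArray<A, D: Dimension> {
    buffer: DeviceBuffer<A>,
    dim: D,
}

impl<A: Clone, D: Dimension> DeviceArray<A, D> {
    fn from_host(host: &Array<A, D>) -> Result<Self, DeviceError> {
        // A real backend would enqueue a host-to-device copy here.
        Ok(Self {
            buffer: DeviceBuffer(host.iter().cloned().collect()),
            dim: host.raw_dim(),
        })
    }

    fn to_host(&self) -> Result<Array<A, D>, DeviceError> {
        // A real backend would copy the data back from the device.
        Array::from_shape_vec(self.dim.clone(), self.buffer.0.clone()).map_err(|_| DeviceError)
    }
}

With that split, adding two device arrays can run on the device, while mixing a host array with a device array is a compile error rather than a potential runtime crash.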

How do array views work in this model? Most interesting functionality in ndarray happens through array views.

I do agree, if you think so, that too much type information is not good either; it just gets in the way. The hard part is finding the balance.

Pencilcaseman commented 3 months ago

Thanks for the review @bluss. I didn't check the validity of stuff particularly rigorously, so I'm sure there are a few potential issues.

I've had a bit of a think, and I reckon it'd be possible to kill a few birds with one (admittedly quite large) stone.

I've read through most of the open issues, and it seems like a lot of them relate to the maintainability of the library (I'd be keen to help out with this if you're still looking for someone to maintain things) and cleaning up or refactoring the code.

If there were a Backend trait which exposed things like OwnedRepr creation (each backend will have its own OwnedRepr type), contiguous operations, strided operations, linear algebra operations, etc., for a given device, then it would be possible to abstract the calculations/operations away from the ArrayBase struct and into those backends.

The backends could be written as reasonably generic code using Rust generics rather than the current heavy macro expansions, making the code easier to maintain. The Array and View types would need to take the backend as a generic type parameter, but that should allow any backend to have full View functionality, since it'd all be handled by the backend provider (see the sketch below).
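
To make that a bit more concrete, here is a very rough sketch of the shape such a trait might take. None of these items exist in ndarray; all names and signatures are hypothetical, and the host implementation is deliberately trivial.

// Hypothetical sketch of a backend trait. Each backend brings its own owned
// storage type and implements the primitive operations, so the array type
// itself never needs to know where the data lives.

#[derive(Clone, Copy)]
enum BinaryOp {
    Add,
    Sub,
    Mul,
    Div,
}

trait Backend {
    // Vec on the host, a device buffer handle on a GPU.
    type OwnedRepr<A>;

    fn alloc<A: Clone>(len: usize, fill: A) -> Self::OwnedRepr<A>;

    // Element-wise op over two contiguous, equal-length buffers.
    fn binary_contiguous(
        op: BinaryOp,
        lhs: &Self::OwnedRepr<f32>,
        rhs: &Self::OwnedRepr<f32>,
    ) -> Self::OwnedRepr<f32>;
}

// Host backend: plain Vec storage and a plain loop.
struct Host;

impl Backend for Host {
    type OwnedRepr<A> = Vec<A>;

    fn alloc<A: Clone>(len: usize, fill: A) -> Self::OwnedRepr<A> {
        vec![fill; len]
    }

    fn binary_contiguous(
        op: BinaryOp,
        lhs: &Self::OwnedRepr<f32>,
        rhs: &Self::OwnedRepr<f32>,
    ) -> Self::OwnedRepr<f32> {
        lhs.iter()
            .zip(rhs)
            .map(|(&a, &b)| match op {
                BinaryOp::Add => a + b,
                BinaryOp::Sub => a - b,
                BinaryOp::Mul => a * b,
                BinaryOp::Div => a / b,
            })
            .collect()
    }
}

// An OpenCL or CUDA backend would implement the same trait, with `OwnedRepr`
// being a device buffer and `binary_contiguous` launching a JIT-compiled kernel.

An array type generic over some B: Backend would then store a B::OwnedRepr<A> and route its operations through the trait, so the OpenCL and CUDA paths wouldn't need to touch the core array code at all.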

Obviously, things like custom mapping functions and hand-written iterators won't work on anything other than the CPU since the data isn't available in host memory, but everything else should work fine.

That has the added benefit of allowing anyone to write their own backend for ndarray for whatever use case they have instead of weaving the functionality into the existing codebase.

Anyway, maybe I'm getting a bit ahead of myself, since that'd be a fairly substantial refactor of the codebase. If that's something you think would be beneficial, then I'd be happy to start looking into it. As I mentioned earlier, I'd also be happy to help maintain the project if you need another maintainer :)

Thanks again for the review!