JuliaGPU / Adapt.jl

RFC: Establish concept of a computing device #52

Closed oschulz closed 1 year ago

oschulz commented 2 years ago

CUDA.jl, oneAPI.jl, etc. all provide a ...Device type, but without a common supertype.

Likewise, our GPU packages all provide functionality to get the device that a given array lives on, but each defines its own function for it. The latter was partially addressed in (JuliaGPU/KernelAbstractions.jl#269), but it's not an elegant solution (all Requires-based) and KernelAbstractions is a heavy dependency. This makes it tricky to address issues like JuliaArrays/ElasticArrays.jl#44. Some of the code mentioned in JuliaGPU/GPUArrays.jl#409 could also lighten its dependencies with common "get-device" functionality (LinearSolve.jl, for example, seems to need GPUArrays.jl only for a b isa GPUArrays.AbstractGPUArray check; similar for DiffEqSensitivity.jl).

This PR and supporting PRs for CUDA.jl, AMDGPU.jl, oneAPI.jl and KernelAbstractions.jl attempt to establish a common supertype for computing devices, and support for querying which device a given object lives on.

I think this will make it much easier to write generic device-independent code.

This PR defines AbstractComputingDevice, AbstractGPUDevice and implements GPUDevices. It's very little code, so there should be no load-time impact.
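
For readers skimming this thread, here is a minimal sketch of the kind of API being proposed (the names are taken from this PR and the comments below; the method body is illustrative, not the actual implementation):

    abstract type AbstractComputingDevice end

    struct CPUDevice <: AbstractComputingDevice end

    abstract type AbstractGPUDevice <: AbstractComputingDevice end

    # Plain arrays live in host memory; GPU packages would add methods
    # for their own array and device types.
    get_computing_device(::Array) = CPUDevice()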

CC @vchuravy, @maleadt, @jpsamaroo, @ChrisRackauckas, @findmyway

Status:

vchuravy commented 2 years ago

@tkf does this jive with what you need for Loops/Executor?

vchuravy commented 2 years ago

Maybe instead of ComputingDevice we call it ComputeUnit? We are moving towards heterogeneous systems in general, and they might not be separate devices.

oschulz commented 2 years ago

Maybe instead of ComputingDevice we call it ComputeUnit? We are moving towards heterogeneous systems in general, and they might not be separate devices.

Sure, absolutely!

Since this touches several PRs maybe I should wait for feedback on the general idea from @maleadt and @jpsamaroo before doing renames and so on?

oschulz commented 2 years ago

@maleadt and @jpsamaroo do you have some initial feedback? And sorry for the state that the AMDGPU part of this is still in @jpsamaroo, I'll need a few pointers on that. :-)

maleadt commented 2 years ago

This seems sensible to me, but I don't understand why it belongs in Adapt.jl. The only purpose of this package is to provide utilities to convert complex object structures, and it is (in principle) unrelated to GPU/array programming. Now, if the proposed device identification were based on the existing Adapt.jl infrastructure (adapt_structure/adapt_storage, although it would probably need to be generalized) I could understand putting it here, but it currently isn't (i.e. echoing @tkf's comments, https://github.com/JuliaGPU/Adapt.jl/pull/52/files#r879031411).

codecov[bot] commented 2 years ago

Codecov Report

Merging #52 (76c686d) into master (d9f852a) will decrease coverage by 30.31%. The diff coverage is 0.00%.

@@             Coverage Diff             @@
##           master      #52       +/-   ##
===========================================
- Coverage   81.48%   51.16%   -30.32%     
===========================================
  Files           5        6        +1     
  Lines          54       86       +32     
===========================================
  Hits           44       44               
- Misses         10       42       +32     
Impacted Files       Coverage Δ
src/Adapt.jl         100.00% <ø> (ø)
src/computedevs.jl   0.00% <0.00%> (ø)


oschulz commented 2 years ago

@maleadt: This seems sensible to me, but I don't understand why it belongs in Adapt.jl.

It seemed to be a natural place, both semantically and dependency-wise.

Semantically, adapt deals with moving storage between devices (at least that's what we use it for, mostly), so it seems natural to have a concept of what a computing device is in here.

Dependency-wise, pretty much all code that will need to define/use get_computing_device depends on Adapt already. We could create a super-lightweight package ComputingDevices.jl - but Adapt.jl would need to depend on it, so that we could define adapt_storage(::CPUDevice, x). I'd be fine with that, but I thought people might prefer not to add such a package, since Adapt.jl is already super-lightweight and code that would need ComputingDevices.jl would very likely need Adapt.jl as well anyway.
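
A hedged sketch of that last point (Adapt.adapt_storage is existing Adapt.jl API; CPUDevice here is a local stand-in for the device type proposed in this PR):

    using Adapt

    struct CPUDevice end  # stand-in for the proposed device type

    # "Move to the CPU": convert GPU (or other) array storage back to plain Arrays.
    Adapt.adapt_storage(::CPUDevice, x::AbstractArray) = Array(x)

    # adapt(CPUDevice(), obj) then recurses through nested objects via the
    # existing adapt_structure machinery and converts the array leaves.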

Now, if the proposed device identification were based on the existing Adapt.jl infrastructure (adapt_structure/adapt_storage

With adapt_...(dev, x) (and I think that would be very good to have, see above) it's already integrated, in a way. And even if we only have a simple default implementation of get_computing_device that uses parent/buffer for now, it would certainly be good to expand in the direction that @tkf suggested ("generalizing Adapt.adapt_structure"). But I think that can be done as a second step (and would benefit Adapt as a whole).

We can always spin-off a separate package ComputingDevices.jl later on, if the need arises.

oschulz commented 2 years ago

some part of this could possibly go in GPUArrays, but having it in a more lightweight package instead is definitely appealing

Yes, one important motivation here is to enable generic code to be aware of devices without taking on a heavy dependency like GPUArrays.

oschulz commented 2 years ago

@vchuravy, @ChrisRackauckas, @tkf, @maleadt, @jpsamaroo thanks for all the initial feedback!

I've updated this PR and tried to include your suggestions. Since a lot has changed I've marked the discussions above as resolved, but please do reopen them if any issues haven't been addressed by the PR changes and the remarks below:

I think having an automatic recursive compute unit resolution will be important so that get_compute_unit_impl won't have to be specialized for most types - and we should support closures. This works:

julia> mllayer = let A = cu(rand(5,5)), b = cu(rand(5)), f = x -> x < 0 ? zero(x) : x
           x -> f.(A * x .+ b)
       end;

julia> x = cu(rand(5));

julia> get_compute_unit((mllayer, x))
CuDevice(0): NVIDIA GeForce

@tkf suggested a "fold over 'relevant data source objects' in a manner generalizing Adapt.adapt_structure". The generated default get_compute_unit_impl does such a fold, but since it doesn't need to reconstruct objects it's not limited to tuples and functions like the current Adapt.adapt_structure. It currently uses all fields - to my knowledge we don't have a widely supported way to get the "relevant data source objects" yet (that would be neat, though!). I suggest we pursue establishing such a standard separately (I suggested something along those lines in JuliaObjects/ConstructionBase.jl#54) and then use it in get_compute_unit_impl when available (even fewer types would need to specialize get_compute_unit_impl then).
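
To make the fold idea concrete, here is a heavily simplified, getfield-based sketch of the recursion (this is not the PR's @generated implementation and deliberately ignores type stability; all names are illustrative):

    # Marker for objects with no device affinity (bitstypes, Symbols, ...).
    struct ComputeUnitIndependent end

    merge_compute_units(a, ::ComputeUnitIndependent) = a
    merge_compute_units(::ComputeUnitIndependent, b) = b
    merge_compute_units(::ComputeUnitIndependent, ::ComputeUnitIndependent) = ComputeUnitIndependent()
    merge_compute_units(a, b) = a == b ? a : error("object spans multiple compute units")

    get_compute_unit(::Array) = :cpu  # placeholder for a real device object
    get_compute_unit(::Union{Number,Symbol,Nothing}) = ComputeUnitIndependent()
    get_compute_unit(x::Union{Tuple,NamedTuple}) =
        foldl(merge_compute_units, map(get_compute_unit, values(x)); init = ComputeUnitIndependent())
    get_compute_unit(x::T) where {T} =
        foldl(merge_compute_units,
              (get_compute_unit(getfield(x, n)) for n in fieldnames(T));
              init = ComputeUnitIndependent())

With these definitions, get_compute_unit((rand(2), 1.0, :a)) folds over the tuple and returns :cpu, while pure bitstype inputs yield ComputeUnitIndependent().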

@maleadt raised the question whether the compute unit concept belongs into Adapt. I strongly feel that it needs to be in a very lightweight package (so that generic code can be aware of compute devices without heavy deps, i.e. GPUArrays.jl is definitely too heavy for it). I would argue that Adapt is a good place, as adapt(::AbstractComputeUnit, x) is part of it. We could create a package ComputeUnits.jl, but Adapt.jl would have to depend on it. My suggestion would be to add the compute unit concept to Adapt.jl - we can still split off a ComputeUnits.jl in the future if necessary.

oschulz commented 2 years ago

(Closed by accident.)

oschulz commented 2 years ago

@ChrisRackauckas how do you think this should be integrated with ArrayInterfaceCore: AbstractDevice, AbstractCPU?

ChrisRackauckas commented 2 years ago

I'm not sure. @Tokazama might have opinions.

Tokazama commented 2 years ago

This seems a lot like what we do in ArrayInterfaceCore. We don't have anything that merges device types, and we account for differences in CPU types that are built on different memory objects, like a tuple.

oschulz commented 2 years ago

This seems a lot like what we do in ArrayInterfaceCore.

Is this already in active use somewhere? I think we should definitely merge this in one lightweight place (ArrayInterfaceCore or Adapt) so that we establish an ecosystem-wide concept of a computing device/node.

Tokazama commented 2 years ago

I think the loop vectorization stuff uses it a lot.

oschulz commented 2 years ago

I think the loop vectorization stuff uses it a lot.

Ok, so we can't just simply replace it - but I assume people will in general be in favor of merging this with the proposal in this PR (meaning adapt() and KernelAbstractions support, and having the GPU packages define the actual device types they are responsible for, to create clean and lean dependency trees)?

Tokazama commented 2 years ago

As you can see here, we could do a better job of handling the actual device types in terms of documentation and additional methods. But we have a pretty good system for deriving the device type here.
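
(For readers who haven't used it: the existing query being discussed looks roughly like this; the returned singleton types may differ between ArrayInterface versions.)

    using ArrayInterfaceCore

    # device(x) returns a singleton describing the memory backing x, e.g. a
    # pointer-backed CPU buffer for a plain Array (CPUPointer in recent versions);
    # tuple-backed arrays such as StaticArrays map to CPUTuple.
    ArrayInterfaceCore.device(rand(3))
    ArrayInterfaceCore.device(view(rand(4), 1:2))  # wrappers are traversed to the parent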

Tokazama commented 2 years ago

We have some nice tricks for avoiding generated methods using Static.jl too BTW. I haven't taken time to really dig in here and try it out, but it's worth thinking about.

oschulz commented 2 years ago

But we have a pretty good system for deriving the device type

Sure! Still, explicit support from the GPU packages would be nicer, right? And it's the only way that allows the user to handle systems with different GPUs cleanly, and also the only really clean way to bring KernelAbstractions into the mix as well.

We have some nice tricks for avoiding generated methods using Static.jl too BTW.

Nice! Could that be used to make the structural fold in this PR type stable without generated code (and how)?

oschulz commented 2 years ago

However we do this I think we should really aim to establish a single type tree for computing devices across the ecosystem, one that reaches down to different GPU (TPU, ...) types. And I think it would be very beneficial for users if they can write adapt(some_device, some_ml_model_or_other_big_thing).

So we'll need a dependency between Adapt and ArrayInterfaceCore - the question is, which way round? Package maintainers, over to you. :-)

Tokazama commented 2 years ago

Sure! Still, explicit support from the GPU packages would be nicer, right? And it's the only way that allows the user to handle systems with different GPUs cleanly, and also the only really clean way to bring KernelAbstractions into the mix as well.

Explicit support for specific GPU devices is something we definitely want. It just takes buy-in from those packages, which up until now was hard to get because we had so much latency. We still need to work out all the bugs to update to Static.jl v0.7 (which fixes a bunch of invalidations), but we're already at a load time of ~0.06 seconds on Julia v1.7.

Nice! Could that be used to make the structural fold in this PR type stable without generated code (and how)?

From an initial look I'd say that StaticSymbol uses an internal generated function to merge Symbols. I'm pretty sure there's no other way to safely combine symbols that's also type stable.

Tokazama commented 2 years ago

However we do this I think we should really aim to establish a single type tree for computing devices across the ecosystem, one that reaches down to different GPU (TPU, ...) types. And I think it would be very beneficial for users if they can write adapt(some_device, some_ml_model_or_other_big_thing).

So we'll need a dependency between Adapt and ArrayInterfaceCore - the question is, which way round? Package maintainers, over to you. :-)

Those maintaining GPU packages will know better whether a given approach will work with their internals, so I'm just trying to explain the advantages of what we've developed thus far. The main advantage is we work pretty hard to navigate type wrappers in the most robust way we can so that if you grab a pointer from something you also can figure out what indices/strides/etc are valid.

oschulz commented 2 years ago

The main advantage is we work pretty hard to navigate type wrappers in the most robust way we can so that if you grab a pointer from something you also can figure out what indices/strides/etc are valid.

Nice! I guess we'll have to combine that with the Adapt-style "structural descend", since we don't just want to cover arrays, but also deeply nested objects in general (MC and statistics model, complex data and so on)?

I guess the question now is will Adapt depend on ArrayInterfaceCore or the other way round. Both have a load time under 1 ms now, so from a user perspective it won't matter much, and both are pretty much unavoidable dependencies in any real project anyway. Depending on preferences of the Adapt and ArrayInterface maintainers I can then try to merge this PR with what's in AI, either here or there.

oschulz commented 2 years ago

@Tokazama: ArrayInterfaceCore. We don't have anything that merges device types, and we account for differences in CPU types that are built on different memory objects, like a tuple.

The equivalent to ArrayInterface.CPUTuple would be Adapt.ComputingDeviceIndependent in this PR (just by looking at, e.g., a StaticArray we can't know if we're currently on a CPU or GPU without additional context, and bitstypes don't really have to be "adapted" between computing devices).

oschulz commented 2 years ago

I'd like to get this moving again - @Tokazama can ArrayInterfaceCore.AbstractDevice evolve/change a bit as part of this effort?

Tokazama commented 2 years ago

I think the GPU side of things might be able to change without any breaking changes for us. We could probably justify some straightforward breaking changes if absolutely necessary if it unified the LoopVectorization and GPU ecosystems.

oschulz commented 2 years ago

Thanks @Tokazama . Ok I'll take a look - otherwise we can just provide a conversion between the device API in this PR and ArrayInterfaceCore.AbstractDevice.

mcabbott commented 1 year ago

Some of the proposed benefits are already available from the very lightweight GPUArraysCore.jl (which did not exist when the issue was opened). You can test x isa AbstractGPUArray, and call adapt of course.

I don't think you can "query total and available memory". But perhaps that could be something like GPUArraysCore.total_memory(::Type{<:Array}) = Sys.total_memory() with a method for the type CuArray?

One thing you can't do is CUDA.functional. Flux uses this so that calling model |> gpu does nothing if there's no GPU. Edit: I wonder how often that could be replaced with total_memory(CuArray) > 0?
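
A sketch of that hypothetical query (total_memory does not exist in GPUArraysCore at the time of writing; Sys.total_memory() and CUDA.total_memory() are existing functions):

    # Hypothetical API following the suggestion above.
    total_memory(::Type{<:Array}) = Sys.total_memory()

    # A GPU backend could then add something like
    #   total_memory(::Type{<:CuArray}) = CUDA.total_memory()
    # and a Flux-style `gpu` helper might check `total_memory(CuArray) > 0`
    # instead of calling CUDA.functional().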

Tokazama commented 1 year ago

It would be nice if we could have the GPU ecosystem and CPU/SIMD/LoopVectorization ecosystems share a common interface though.

oschulz commented 1 year ago

Also, this PR is not just about checking if an array is a GPU array, but about checking what kind of device(s) a whole data structure is located on - plus the ability to use adapt to move data structures to a specific device. I'm very happy that we have GPUArraysCore.jl now, but I think we definitely need a common super-type for accelerator (and non-accelerator!) computing devices as well as for arrays. KernelAbstractions.jl, for example, wouldn't need to define an extra device type for each supported backend anymore.

As for pushing this forward: Maybe for now we can just work on fixing this up and merging this, and then look at possible connections to ArrayInterfaceCore.jl?

ToucheSir commented 1 year ago

Since the pending review comments appear to be about tree traversal, would it be possible to try landing a reduced version of this PR with just Adapt.adapt_storage(dev, x)? Maybe a constrained get_computing_device(x::Union{Array,AbstractGPUArray})::AbstractComputingDevice as a stretch goal. The former would be especially helpful on the Flux side.

oschulz commented 1 year ago

would it be possible to try landing a reduced version of this PR with just Adapt.adapt_storage(dev, x)?

Fine with me - we can do the tree traversal in a second step. Maintainers, what do you think?

maleadt commented 1 year ago

I'm still skeptical (this wasn't the scope of Adapt.jl and I feel like functionality is being tacked on because it's a common lightweight dependency), but if people are waiting for something like this I don't want to hold it up any longer 🙂 I guess it's fine to extend the scope to what the package is most commonly used for anyway. With respect to the actual functionality, we can always start with a getfield-based get_computing_device and try to figure out a better system to unify it with adapt's tree traversal later.

oschulz commented 1 year ago

this wasn't the scope of Adapt.jl

I do get that point, though maybe one justification would be that the current draft also specializes adapt() for devices?

With respect to the actual functionality, we can always start with a getfield-based get_computing_device

Would the current recursive get_compute_unit in this draft fall under "getfield-based", or should the recursion/traversal be removed?

maleadt commented 1 year ago

Would the current recursive get_compute_unit in this draft fall under "getfield-based", or should the recursion/traversal be removed?

It's getfield-based as it recurses based on the struct's fields, as opposed to the current "rule"-based approach that Adapt takes (with its plenty adapt_structure methods). But that's fine for now, if we even want to change it, because the API doesn't restrict it to be getfield-based.
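
For contrast, the "rule"-based pattern referred to here is the one wrapper types already use with Adapt today (adapt_structure is existing Adapt.jl API; MyWrapper is just an illustrative type):

    using Adapt

    struct MyWrapper{T}
        data::T
    end

    # Each wrapper opts in with an explicit rule telling Adapt how to rebuild it
    # after its contents have been adapted.
    Adapt.adapt_structure(to, w::MyWrapper) = MyWrapper(adapt(to, w.data))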

@vchuravy any final thoughts?

Tokazama commented 1 year ago

I'm not sure if it makes a difference, but all of the device methods have been moved to ArrayInterfaceCore so if there are any reservations regarding integrating this because of start up latency, those should be mostly resolved.

oschulz commented 1 year ago

It would be nice not to have divergent APIs here for Adapt and ArrayInterfaceCore - but Adapt.adapt_storage(device, obj) (very useful on the user side, I believe) could only be implemented if Adapt would depend on ArrayInterfaceCore (or the other way round). Could I get a quick maintainer opinion on whether that would be acceptable?

maleadt commented 1 year ago

I see no reason not to; it's a dependency with minimal load time and no compatibility constraints.

Tokazama commented 1 year ago

Also, load time is likely to stay that way. Since we got rid of Requires I've been testing load time for nearly every PR to ArrayInterfaceCore. Probably should work that into automated testing for each PR at some point, though...

oschulz commented 1 year ago

@maleadt: I see no reason not to; it's a dependency with minimal load time and no compatibility constraints. @Tokazama: Also, load time is likely to stay that way.

Ok, great - I'll redraft this PR-complex then, including ArrayInterfaceCore. I'm on holiday, gimme a bit of time ..

Jutho commented 1 year ago

Bit late to the party, but if two packages (Adapt and ArrayInterfaceCore) would need this, and there is a discussion over which of the two should depend on the other, then why not have this functionality in a stand-alone package as was contemplated at the early stage of this PR?

oschulz commented 1 year ago

Hi,

sorry for the long silence, been a bit overloaded with other stuff, but this is most definitely still on my to-do list. In fact, I've recently come across some use cases that really need something like adapt(computing_device, object).

why not have this functionality in a stand-alone package

We could, but as I understand it there's no objection to Adapt depending on ArrayInterfaceCore, and since both are already very lightweight it might be overkill to introduce a third package for this. We can still do so later on if we need to, after all.

Tokazama commented 1 year ago

Bit late to the party, but if two packages (Adapt and ArrayInterfaceCore) would need this, and there is a discussion over which of the two should depend on the other, then why not have this functionality in a stand-alone package as was contemplated at the early stage of this PR?

It's not so much that two packages need this as that one package already has it partially implemented.

Jutho commented 1 year ago

Ok, it was a question out of curiosity. Knowing what device an array is living on is a very basic primitive. For example, I have some custom implementations for DenseArray types, but these should only be used for host/CPU stuff, not GPU arrays. I am not particularly opposed to depending on ArrayInterfaceCore, but there is also quite a bit of stuff in there that I definitely do not need and that seems perhaps SciML/LinearSolve.jl specific (lu_instance), and some things where I don't really understand from the documentation what they are about and that are probably matrix-specific (matrix_colors?).

Certainly, at the moment ArrayInterfaceCore.device(::CuArray) is producing the wrong result, so I will go with depending on GPUArraysCore and using the AbstractGPUArray type anyway.
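
The GPUArraysCore-based check described here is essentially a one-liner (GPUArraysCore and AbstractGPUArray are real and very lightweight; is_host_array is just an illustrative helper name):

    using GPUArraysCore: AbstractGPUArray

    # Restrict custom DenseArray code paths to host arrays.
    is_host_array(A::AbstractArray) = !(A isa AbstractGPUArray)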

Tokazama commented 1 year ago

I'm not saying it can't be a third package. I just want to be clear that there's a bit more work there than one might assume, because we need to avoid disrupting things already in production.

oschulz commented 1 year ago

Just a sign of life: I haven't forgotten about this one. Due to recent changes in the interdependence between KernelAbstractions and the GPU packages it will have to evolve a bit, though, I think. I'll play around with a few ideas.

vchuravy commented 1 year ago

@oschulz also, thank you for all the thinking you've put into this. My long-term plan is to add KA as a dependency to GPUArrays, and anchor the notion of the compute device in KernelAbstractions, e.g. as the backend.

I also took some inspiration and added an allocate function.

oschulz commented 1 year ago

My long-term plan is to add KA as a dependency to GPUArrays, and anchor the notion of the compute device in KernelAbstractions e.g. as the backend.

Thanks, that sounds great!

oschulz commented 1 year ago

I'll explore the concepts in here in a standalone package HeterogeneousComputing.jl for a while. I have a current need to use this kind of functionality in two or three real-life applications (so I'll register HeterogeneousComputing), and at least one of them will involve determining if data is on disk (DiskArrays/HDF) in addition to distinguishing just between CPU and GPU types. So I think I'll need to experiment and iterate this a bit more, before proposing a new incarnation of this in packages as central as Adapt and ArrayInterfaceCore.

Experimentation will be easier in a package that can go through versions faster without affecting half the ecosystem. :-) And with Julia v1.9, weakdeps make this easier now without a huge Requires load-time hit.

I'll close this and open a "reminder issue" - I still believe it would be great to have common supertypes for compute units, adapt integration for them and a central way of determining what (nested) data is on which device.