To fix https://github.com/JuliaGPU/CUDA.jl/issues/2191, where broadcasting a `CuArray` backed by unified memory results in a `CuArray` in device memory (i.e. the buffer type is lost), I want to make the `CuArrayStyle` broadcast style include the buffer type so that we can preserve it. That's currently impossible, as the style is constructed by GPUArrays through the very cursed `Adapt.parent(W){Adapt.eltype(W), Adapt.ndims(W)}` constructor call (`Adapt.parent` currently returns an unadorned typename), which both does not know about the additional buffer typevar and invokes a specific constructor without setting that typevar.
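To illustrate the problem, here is a minimal, self-contained sketch; `FakeArray`, its buffer typevar `B`, and the `:unified` tag are hypothetical stand-ins for `CuArray` and its memory kinds, not the real API:

```julia
# Toy wrapper with an extra buffer typevar B, mimicking a CuArray{T,N,B}.
struct FakeArray{T,N,B} <: AbstractArray{T,N}
    dims::NTuple{N,Int}
end
Base.size(A::FakeArray) = A.dims

A = FakeArray{Float32,2,:unified}((2, 2))
W = typeof(A)

# Rebuilding the type from only eltype/ndims, like the constructor call above,
# leaves the buffer typevar unset:
rebuilt = Base.typename(W).wrapper{eltype(A), ndims(A)}
rebuilt == W  # false: `rebuilt` is the UnionAll `FakeArray{Float32, 2}`, B is gone
```

The rebuilt type is a `UnionAll` with `B` still free, which is exactly the information loss that sends unified-memory arrays back to device memory.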
Instead, in https://github.com/JuliaGPU/Adapt.jl/pull/75 I make it so that `Adapt.parent` (renamed to `Adapt.parent_type` to avoid confusion with `Base.parent`, which doesn't work on types) returns the full type, including the buffer typevar. Then, in this PR, I make it so that the back-end is not responsible for providing the `BroadcastStyle` methods, so that additional information can be put in there.
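A hedged sketch of what this enables, continuing the toy types from before (none of these names are the actual GPUArrays/CUDA.jl definitions): once the style itself carries the buffer typevar, broadcasting can preserve it.

```julia
using Base.Broadcast: AbstractArrayStyle

# Toy wrapper with a buffer typevar B, as before.
struct FakeArray{T,N,B} <: AbstractArray{T,N}
    dims::NTuple{N,Int}
end
Base.size(A::FakeArray) = A.dims

# The broadcast style now carries B alongside the dimensionality, so the
# buffer type survives the trip through the broadcasting machinery:
struct FakeArrayStyle{N,B} <: AbstractArrayStyle{N} end
Base.BroadcastStyle(::Type{FakeArray{T,N,B}}) where {T,N,B} = FakeArrayStyle{N,B}()

A = FakeArray{Float32,1,:unified}((4,))
style = Base.BroadcastStyle(typeof(A))  # FakeArrayStyle{1, :unified}()
```

With the buffer typevar available in the style, `similar`-style allocation for the broadcast result can pick the matching memory kind instead of defaulting to device memory.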
Changes to back-ends should be minimal, see the JLArrays diff in here, or https://github.com/JuliaGPU/CUDA.jl/pull/2203. It is however a breaking change.
cc @vchuravy @jpsamaroo @pxl-th