JuliaParallel / DistributedArrays.jl

Distributed Arrays in Julia

Make broadcast implementation work with chunktype #164

Closed: vchuravy closed this 6 years ago

vchuravy commented 6 years ago

This PR has two goals. First and foremost, it makes broadcast work for DArrays in general:

B = DArray((400, 400)) do I
        m, n = map(length, I)
        reshape(rand(Float32, m*n), m, n)
    end;
B .+ B .* B

did not work before. Secondly, it tries to account for the fact that the underlying ArrayType may not be Array and may provide its own broadcast implementation and customisations:

A = DArray((400, 400)) do I
        m, n = map(length, I)
        reshape(CuArrays.CURAND.curand(Float32, m*n), m, n)
    end;
A .+ A .* A
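The chunk-type-aware mechanics can be sketched in plain Julia. This is a hypothetical stand-in, not the PR's actual implementation: `ChunkWrapper` plays the role of a DArray holding one chunk, and its custom broadcast style rebuilds the fused expression over the underlying chunk, so the chunk type's own broadcast machinery (e.g. a GPU backend) stays in charge of materialisation.

```julia
# Hypothetical sketch; ChunkWrapper/ChunkStyle are illustrative stand-ins
# for a container whose broadcast defers to its chunk type.
struct ChunkWrapper{T,N,A<:AbstractArray{T,N}} <: AbstractArray{T,N}
    chunk::A
end
Base.size(w::ChunkWrapper) = size(w.chunk)
Base.getindex(w::ChunkWrapper, i::Int...) = w.chunk[i...]

# A custom style marks broadcast expressions involving a ChunkWrapper.
struct ChunkStyle <: Base.Broadcast.AbstractArrayStyle{Any} end
Base.Broadcast.BroadcastStyle(::Type{<:ChunkWrapper}) = ChunkStyle()

# Recursively rebuild the fused expression over the underlying chunks,
# so the chunk type resolves its own broadcast style.
unwrap(x) = x
unwrap(w::ChunkWrapper) = w.chunk
unwrap(bc::Base.Broadcast.Broadcasted) =
    Base.Broadcast.broadcasted(bc.f, map(unwrap, bc.args)...)

Base.copy(bc::Base.Broadcast.Broadcasted{ChunkStyle}) =
    ChunkWrapper(Base.Broadcast.materialize(unwrap(bc)))

B = ChunkWrapper(rand(Float32, 4, 4))
B .+ B .* B  # dispatches through ChunkStyle, materialises on the chunk
```

The key design point mirrored here is that the wrapper never computes elementwise itself; it hands the rebuilt `Broadcasted` back to `materialize`, letting the chunk's `BroadcastStyle` decide how to execute it.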

Currently, the following does not work yet:

A .+ sin.(A)

because we apparently short-circuit the broadcast customisation of CuArray that does the function translation.
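The short-circuit can be reproduced with a toy backend in plain Julia. Everything here is a hypothetical stand-in (ToyArray, ToyStyle, translated_sin) mimicking how a GPU backend rewrites functions like `sin` via the `Broadcast.broadcasted` hook: going through broadcast fires the rewrite, while an outer container that applies the function directly to the localparts bypasses it.

```julia
# Toy backend with its own "function translation"; all names are
# hypothetical stand-ins for the CuArray machinery.
struct ToyArray{T,N} <: AbstractArray{T,N}
    data::Array{T,N}
end
Base.size(a::ToyArray) = size(a.data)
Base.getindex(a::ToyArray, i::Int...) = a.data[i...]

struct ToyStyle <: Base.Broadcast.AbstractArrayStyle{Any} end
Base.Broadcast.BroadcastStyle(::Type{<:ToyArray}) = ToyStyle()

const TRANSLATIONS = Ref(0)                   # counts how often the rewrite fired
translated_sin(x) = (TRANSLATIONS[] += 1; sin(x))

# The backend's customisation: rewrite `sin` when broadcast over ToyArray.
Base.Broadcast.broadcasted(::ToyStyle, ::typeof(sin), a) =
    Base.Broadcast.Broadcasted{ToyStyle}(translated_sin, (a,))

Base.copy(bc::Base.Broadcast.Broadcasted{ToyStyle}) =
    ToyArray(broadcast(bc.f, map(x -> x isa ToyArray ? x.data : x, bc.args)...))

a = ToyArray([0.0, pi/2])
sin.(a)                # goes through broadcasted(::ToyStyle, sin, a): rewrite fires
broadcast(sin, a.data) # applying f to the raw localpart: rewrite is skipped
```

This is the failure mode described above: if the outer container evaluates the broadcast function directly on the localparts, the inner array type's `broadcasted` customisation never runs.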

cc: @maleadt

vchuravy commented 6 years ago

Breadcrumb for myself. In the CuArray case we are calling a GPU kernel with DArray{CuArray}... So we might want to constrain ourselves to situations where the localparts match up...
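That constraint can be sketched as a guard over the chunk layouts (a hypothetical helper, with plain index ranges standing in for each argument's partitioning; this is not DistributedArrays API):

```julia
# Hypothetical guard: fusing a broadcast over localparts is only safe when
# every argument partitions the index space identically.
aligned(parts...) = all(==(first(parts)), parts)

# Chunk layouts as tuples of index ranges (one entry per worker).
chunksA = [(1:200, 1:400), (201:400, 1:400)]  # split along rows
chunksB = [(1:200, 1:400), (201:400, 1:400)]  # same split: safe to fuse
chunksC = [(1:400, 1:200), (1:400, 201:400)]  # split along columns: not safe

aligned(chunksA, chunksB)  # true
aligned(chunksA, chunksC)  # false
```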

vchuravy commented 6 years ago

Tests for CuArrays and DArrays live at https://github.com/vchuravy/Heterogeneous.jl, and the (minimal) tests passed successfully: https://gitlab.com/JuliaGPU/Heterogeneous.jl/-/jobs/99040385. I will squash and merge this.