Hi! First, I'll thank you all for this amazing module, which I just discovered and enjoy very much. I am very new to Julia and started to play around with some of the easy GPU computing that ParallelStencil.jl offers, but I came across an error when running one of the provided examples (examples/diffusion3D_multigpucpu_hidecomm.jl), where T is a CUDA array (selected by ParallelStencil) and T_nohalo a "standard" Array. It causes the following error:

So from my basic understanding, it would appear that ParallelStencil doesn't allow broadcasting between CUDA arrays and standard ones. Is that no longer supported? Sorry in advance if this is a dumb issue; I have yet to find a workaround.

---

Thanks for pointing this out @Eure-L. Indeed, some recent updates in CUDA.jl now require explicitly converting the GPU array for broadcasting. This will work:

T_nohalo .= Array(T[2:end-1,2:end-1,2:end-1]);

Thanks for reporting, and we will update the examples for the next release.

---

Eure-L closed this issue 2 years ago.
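For context, here is a minimal, hedged sketch of the maintainers' suggested fix: broadcasting from a GPU array into a host array fails in recent CUDA.jl, but converting the GPU data to a host `Array` first works. The array names and sizes below are illustrative only (not taken from the example), and running this requires a CUDA-capable GPU with CUDA.jl installed:

```julia
using CUDA

# Illustrative sizes; in the example these come from ParallelStencil's setup.
nx, ny, nz = 8, 8, 8
T        = CUDA.rand(Float64, nx, ny, nz)   # device (GPU) array
T_nohalo = zeros(nx-2, ny-2, nz-2)          # host (CPU) array for the interior, without the halo

# T_nohalo .= T[2:end-1, 2:end-1, 2:end-1]          # errors: mixes CuArray and Array in broadcast
T_nohalo .= Array(T[2:end-1, 2:end-1, 2:end-1]);    # works: copy GPU data to the host first
```

The `Array(...)` call materializes the sliced GPU data on the CPU, so the subsequent broadcast assignment involves only host arrays.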