Closed by ranjanan 8 years ago
@tkelman I have incorporated those changes. Thanks.
@ViralBShah , if @tkelman doesn't have any more suggestions, can we merge this?
Is this good to merge? Does it need its commits squashed?
I assume we need to tag a new version.
@vchuravy okay with you? Make any edits if anything looks off, otherwise someone with commit access should do the tag so we can push it to https://github.com/JuliaGPU/CUDA.jl/releases as well
@timholy As far as I know, CUDA.jl and CUDArt.jl have a similar goal, and CUDA.jl was mostly abandoned until recently. Since you originally created CUDArt, could you briefly describe how it differs and whether it makes sense to have both around? Otherwise I would suggest that we keep CUDA.jl, with a disclaimer that development has moved on to CUDArt.
Maybe we should retire this package then?
All the other CUDA packages in JuliaGPU (e.g. CUBLAS) depend on CUDArt, and the contributor graphs say everything: https://github.com/JuliaGPU/CUDA.jl/graphs/contributors vs https://github.com/JuliaGPU/CUDArt.jl/graphs/contributors
Yes it certainly looks like @ranjanan has been working on the wrong one then. If CUDArt is strictly better, this one should be deprecated so it doesn't lead to confusion and misdirected effort again.
I agree. I have seen occasional use of CUDA.jl in the wild too, and hence clear signalling from the JuliaGPU organization about deprecation and using CUDArt would be good.
Thanks for kicking this off, @vchuravy. It's been a while, but these are the main differences I remember:
rt means "runtime," as in it's a wrapper of the runtime API. For many months we were in the situation that CUDA.jl could not be used with runtime-based toolkits like CUFFT (we got segfaults), and at the time the advice on the web was that you couldn't use the driver and runtime APIs together. (Well, you can call driver routines if you're using the runtime API, but the impression we all had was that you couldn't call runtime routines if you hadn't initialized resources through the runtime.) Eventually @moon6pence figured out how to get the two working together, but by that point CUDArt had advanced well beyond CUDA.jl, with fill!, copy!, and sleep, and support for arrays represented with PitchedPtrs and HostArrays (although the latter have problems spelled out in the README). On the flip side, I think most of @maleadt's work (and your extension of it) was based on CUDA.jl.
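To make the runtime-vs-driver distinction concrete, here is a rough sketch of the runtime-API style that CUDArt.jl exposes. The names used (devices, capability, CudaArray, to_host, fill!) follow my reading of the CUDArt.jl README of that era and should be treated as approximate rather than authoritative:

```julia
using CUDArt

# Sketch only: runtime-API usage through CUDArt.jl. The do-block form
# initializes the selected device(s) and cleans up afterwards.
result = devices(dev -> capability(dev)[1] >= 2) do devlist
    d_A = CudaArray(rand(Float64, 100))  # copy a host array to the device
    fill!(d_A, 3.0)                      # one of the conveniences mentioned above
    h_A = to_host(d_A)                   # copy the data back to the host
    sum(h_A)
end
```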
CUDArt also uses the driver API for module loading and kernel execution. Maybe we should strip CUDA.jl of its high-level features and point users towards CUDArt.jl instead (which supersedes CUDA.jl for almost all use cases), while still keeping the low-level functionality available in CUDA.jl for those who really need it (CUDArt.jl, and my Julia/NVPTX runtime).
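For contrast, a minimal sketch of that driver-API style (module loading plus an explicit kernel launch), which is the low-level functionality being discussed here. The names (create_context, CuModule, CuFunction, launch) and the precompiled vadd.ptx kernel are recalled from the old CUDA.jl README and may not match the current package exactly:

```julia
using CUDA

# Sketch only: driver-API module loading and kernel execution.
dev = CuDevice(0)
ctx = create_context(dev)            # explicit driver-API context

md   = CuModule("vadd.ptx")          # assumes a precompiled PTX kernel file
vadd = CuFunction(md, "vadd")        # look up the kernel by name

ga = CuArray(rand(Float32, 12))      # inputs copied to the device
gb = CuArray(rand(Float32, 12))
gc = CuArray(Float32, (12,))         # uninitialized output buffer

launch(vadd, 12, 1, (ga, gb, gc))    # 12 blocks, 1 thread per block
c = to_host(gc)                      # copy the result back

free(ga); free(gb); free(gc)
unload(md)
destroy(ctx)
```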
@maleadt, that sounds like a fine plan.
@tkelman Could you review this so we can merge? Thanks.