Evizero opened this issue 7 years ago
I'd support this kind of differentiation. As you surely know, resources are pretty thin wrappers, so adding a bazillion of them doesn't seem like a problem, especially when there's a genuine need like in this case.
One pitfall that came to mind: should there be some kind of "superset" logic in such a case? For example, if someone declares `addresource(CUDADev)`, do they still have to declare `addresource(CUDALib)`?
Would the algorithm be shared in any sense? Or would they be executing completely different code?
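To put the first question in code (the resource types are hypothetical and would first have to be added to the package; `addresource`/`haveresource` are the existing ComputationalResources.jl calls):

```julia
using ComputationalResources

# Assuming hypothetical CUDADev and CUDALib resources exist in the package,
# the "superset" question is whether declaring the more specific capability ...
addresource(CUDADev)

# ... should automatically make the more general one available too:
haveresource(CUDALib)   # true or false?
```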
In my understanding this was more about declaring capabilities; I didn't really consider shared dispatch. For that purpose `::Union{CUDADev,CUDALib}` seems easy enough(?). I'm lacking data on this, so I'll play around with the idea.
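For example (a quick sketch; `CUDADev` and `CUDALib` are hypothetical types here, defined bare rather than as proper package resources):

```julia
# Hypothetical resource types, just to make the dispatch question concrete.
struct CUDADev end
struct CUDALib end

# If the algorithm really is the same for both, one Union method covers it:
myalg(::Union{CUDADev,CUDALib}, args...) = 1   # shared CUDA implementation
```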
Well, the main issue is that the algorithm dispatches on the resource:

```julia
myalg(::CPU1, args...) = 1
myalg(::CPUThreads, args...) = 2
# ...
```
@Evizero I'm interested in this feature as well. Did you get around to it eventually?
not so far, no
@navidcy, it's only a couple of lines of code to add a new resource, so if you are ready to use such a feature I'd recommend just submitting a PR.
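For reference, a sketch of what that amounts to (the name is hypothetical, and the exact supertype and parameterization should be copied from the existing definitions in the package source, which I'm omitting here):

```julia
# Hypothetical CUDADev resource, written by analogy with the existing
# resources, which (IIRC) carry a `settings` field; give it the same
# supertype the current resource types use.
struct CUDADev{T}
    settings::T
end
CUDADev() = CUDADev(nothing)   # default: no settings

export CUDADev
```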
AFAICT CUDAnative no longer requires a source-build of Julia, so life just keeps getting better :smile:.
One discussion that might be worth having is to differentiate between `CUDAnative`, which requires the user to have a self-compiled Julia version, and `*.ptx` files used via `CUDAdrv`. Maybe `CUDADev` vs `CUDALibs`?

What's interesting about `CUDAnative` is that one can emit and store the produced `ptx` output, like the example at https://github.com/JuliaGPU/CUDAnative.jl/blob/master/examples/reduce/benchmark.jl#L16-L22 shows.
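Roughly, the pattern looks like this (a paraphrased sketch rather than the linked file; `my_kernel` is a placeholder and the exact `code_ptx` keyword options may differ between `CUDAnative` versions):

```julia
using CUDAnative

# Toy kernel standing in for the reduction kernel in the linked example;
# the body doesn't matter here, the point is the PTX emission below.
function my_kernel(x::Float32)
    i = threadIdx().x
    y = x * Float32(i)
    return
end

# Write the compiled PTX for this argument signature to a file that can be
# shipped with a package (keyword options such as the target compute
# capability are omitted).
open("my_kernel.ptx", "w") do io
    CUDAnative.code_ptx(io, my_kernel, Tuple{Float32})
end
```

A shipped `.ptx` file can then be loaded at runtime through `CUDAdrv` alone (e.g. `CuModuleFile` plus `CuFunction`), which is the lower-requirement path this issue is about.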
This means that if one is clever about it, it is possible to ship pre-compiled `ptx` kernels to reduce user requirements, while we as developers still write everything using `CUDAnative`.

Naturally this is only part of the puzzle, but we gotta start the discussion somewhere :)