Closed: lukehutch closed this issue 4 years ago.
My guess is that quite a lot of work would be needed, but I have no experience with k-d trees on GPUs, so I might be wrong.
This post might be helpful:
Don't see a reason to keep this open since there isn't really anything actionable here.
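For anyone landing here later: the "lot of work" is that k-d tree construction and the branchy, data-dependent traversal would have to be redesigned for SIMT hardware. What does run on `CuArray`s out of the box is brute-force search, since it is just broadcasting plus a reduction. Below is a minimal sketch using today's CUDA.jl; the `nearest_bruteforce` helper is a hypothetical name for illustration, not part of NearestNeighbors.jl.

```julia
using CUDA

# Brute-force nearest neighbour on the GPU. This works on CuArrays as-is
# because it only uses broadcasting and a reduction, no tree traversal.
function nearest_bruteforce(data::CuMatrix{Float32}, query::CuVector{Float32})
    # Squared Euclidean distance from the query to every column of `data`.
    d2 = vec(sum(abs2, data .- query; dims=1))
    # Copy the distance vector to the host and reduce there for simplicity.
    findmin(Array(d2))  # returns (squared distance, column index)
end

data  = CUDA.rand(Float32, 3, 10_000)  # 3×n points, columns are points, on the GPU
query = CUDA.rand(Float32, 3)
dist2, idx = nearest_bruteforce(data, query)
```

For moderate point counts this baseline can be surprisingly competitive, since the whole distance computation is a couple of GPU kernels rather than an irregular tree walk.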
I'm new to Julia, and I'm trying it out because it seems to be the simplest way to program GPUs right now using `CuArray`. I need a fast kd-tree algorithm that runs on a GPU, among other things, and I found NearestNeighbors.jl. How much work would be involved in getting this code working out of `CuArray` instances rather than its current CPU-bound data structures?
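For context, the CPU-bound path being asked about looks like this; a minimal sketch of the documented NearestNeighbors.jl usage, where both the data matrix and the tree live in host memory:

```julia
using NearestNeighbors

data = rand(3, 10_000)     # 3×n matrix: each column is a point (plain CPU Array)
kdtree = KDTree(data)      # tree construction happens entirely on the CPU

# 5 nearest neighbours of a query point, sorted by distance
idxs, dists = knn(kdtree, rand(3), 5, true)
```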