MilesCranmer / SymbolicRegression.jl

Distributed High-Performance Symbolic Regression in Julia
https://astroautomata.com/SymbolicRegression.jl/dev/
Apache License 2.0
564 stars 68 forks

[Feature] GPU acceleration #111

Open Forbu opened 2 years ago

Forbu commented 2 years ago

First, thank you Miles Cranmer for creating this amazing lib ^^. My down-to-earth question is: do you think it is possible to use a GPU to accelerate the symbolic regression? And would it even be worthwhile for speeding up the computation?

MilesCranmer commented 2 years ago

I tried this at one point! The attempt is on the cuda branch, which actually runs. However, it turned out that (my attempt of it, at least) was quite slow. The reason is that typical datasets are so small that it is most important to keep the expression evaluation (CPU, but potentially also GPU) and the tree mutations (CPU-only) as tightly coupled as possible. The overhead of launching separate kernels on the GPU bottlenecked the search.

However, maybe it is possible to speed it up with some extra tinkering with the expression evaluation. You would want to pass CuArray types through the search using CUDA.jl: https://github.com/JuliaGPU/CUDA.jl.
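To make the launch-overhead point concrete, here is a minimal sketch (my own illustration, not code from the cuda branch) of why per-node evaluation is expensive on a GPU: Julia's broadcast fusion can combine a whole expression into one kernel, but a recursive tree walk evaluates one operator at a time, so each node pays its own launch latency, which dominates for the few-thousand-row datasets typical of symbolic regression.

```julia
using CUDA

# A small dataset, as is typical in symbolic regression.
x_gpu = CuArray(rand(Float32, 1000))

# Written as a single fused broadcast, this is ONE kernel launch:
y = cos.(x_gpu .* 2.0f0)

# But a recursive tree evaluator computes each node's output
# separately, so the same expression becomes TWO launches
# (and a deep tree becomes many), each with fixed latency:
tmp = x_gpu .* 2.0f0   # kernel launch 1: the `*` node
y2  = cos.(tmp)        # kernel launch 2: the `cos` node
```

On arrays this small, the fixed cost of each launch can exceed the arithmetic itself, which matches the bottleneck described above.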

dbl001 commented 1 year ago

What about with Metal.jl?

https://github.com/JuliaGPU/Metal.jl

MilesCranmer commented 1 year ago

Metal.jl is just for macOS, right? Perhaps it could be done, as the evaluation part is still the slowest; not sure. You would basically need to write a kernel for the following recursive function: https://github.com/MilesCranmer/SymbolicRegression.jl/blob/3ea5aadbdeaa2e936e6117373c625e5c5a947daa/src/EvaluateEquation.jl#L38-L44 (pseudocode; the actual implementation is a bit more complex, for speed)
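For readers who don't follow the link, the linked function has roughly the following shape. This is a simplified sketch only: the field names (`degree`, `constant`, `val`, `feature`, `op`, `l`, `r`) approximate the real node type, I use a `Function` directly where the actual code indexes into an operator list, and the real implementation fuses common patterns and short-circuits on NaN/Inf for speed.

```julia
# Simplified sketch of recursive expression-tree evaluation over a
# dataset X (features × rows). Each operator node broadcasts over
# its children's outputs, which is the part a GPU kernel would
# need to replace.
function eval_tree(tree, X::AbstractMatrix{T}) where {T}
    if tree.degree == 0
        # Leaf node: either a constant or a feature column.
        return tree.constant ? fill(T(tree.val), size(X, 2)) : X[tree.feature, :]
    elseif tree.degree == 1
        # Unary operator applied element-wise to the child's output.
        return tree.op.(eval_tree(tree.l, X))
    else
        # Binary operator applied element-wise to both children.
        return tree.op.(eval_tree(tree.l, X), eval_tree(tree.r, X))
    end
end
```

Because the recursion allocates and launches work per node, a GPU backend (CUDA or Metal) would want to compile the whole tree into a single kernel rather than translate this function call-for-call.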