Closed maxrodkin closed 3 years ago
I'm sorry, I don't have any experience with using GPUs to speed up genetic algorithms, or with GPUs in general. Before taking this route, I would consider improving the fitness function, for example by rewriting it in C++ using the fantastic Rcpp package.
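To illustrate the suggestion above, here is a minimal sketch of moving a fitness computation into C++ via `Rcpp::cppFunction`. The function body and its inputs are toy stand-ins invented for illustration, not the actual portfolio fitness discussed in this thread:

```r
library(Rcpp)

# Compile a toy C++ fitness function: a penalized weighted return.
# This is an illustrative placeholder, not the real portfolio objective.
cppFunction('
double fitness_cpp(NumericVector w, NumericVector mu) {
  double ret = 0.0, ssq = 0.0;
  for (int i = 0; i < w.size(); ++i) {
    ret += w[i] * mu[i];   // weighted expected return
    ssq += w[i] * w[i];    // quadratic penalty term
  }
  return ret - 0.5 * ssq;
}')

fitness_cpp(c(0.5, 0.5), c(0.1, 0.2))
```

A compiled fitness like this can then be passed directly as the `fitness` argument of `GA::ga()`, which often helps far more than parallelization when the per-evaluation cost dominates.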
Thank you, Luca. Yes, it seems complicated. Rather, I'll use the snow package with a Docker cluster, since it's compatible with GA. But your idea is great, I'll think about it.
I have been working on a portfolio optimisation problem, and I've been using the GA package successfully. The genome length is about 40, and the type is real-valued. The fitness function is quite complex and takes about 0.5-1 second per evaluation, so the whole script runs for about 5-10 minutes. I used the GA "parallel" parameter together with the "snow" package to speed up the run, and these methods work.
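The setup described above can be sketched as follows. The fitness function here is a toy stand-in (the real one is the slow portfolio objective), and the population size and iteration count are illustrative values, not taken from the thread:

```r
library(GA)

# Toy stand-in for the slow portfolio fitness function described above.
fitness <- function(x) -sum((x - 0.5)^2)

res <- ga(
  type    = "real-valued",
  fitness = fitness,
  lower   = rep(0, 40),      # genome length ~40, as described
  upper   = rep(1, 40),
  popSize = 50,              # illustrative settings
  maxiter = 20,
  parallel = "snow"          # evaluate fitness in parallel via the snow backend
)

summary(res)
```

The `parallel` argument of `ga()` also accepts `TRUE`, a core count, or a pre-built cluster object, which is how a snow cluster spread across Docker containers would typically be plugged in.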
Now I want to speed it up with a GPU. What approach should be used? OpenCL or gpuR to rewrite the GA methods to be GPU-based, or does a simpler design exist?
I am using Ubuntu 18.04, CUDA 10.0, Jupyter and R.