CodeReclaimers / neat-python

Python implementation of the NEAT neuroevolution algorithm
BSD 3-Clause "New" or "Revised" License
1.42k stars 495 forks

Add GPU support #116

Open cliftonarms opened 7 years ago

cliftonarms commented 7 years ago

It would be a real shame not to include the use of GPUs.

https://weeraman.com/put-that-gpu-to-good-use-with-python-e5a437168c01

Great code by the way.

drallensmith commented 7 years ago

If it could be done such that the library (as opposed to example) code would work both with the basic python libraries only and with numpy & related extensions when available, that could work. I have been examining CMA-ES (purecma.py) as a possibility for this, for instance.

-Allen
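The dual-path idea above might look something like this minimal sketch (the `dot` helper is hypothetical and not part of neat-python): a pure-Python fallback when NumPy is absent, and a vectorized path when it is available.

```python
# Hypothetical helper illustrating optional NumPy acceleration:
# the library code works with the standard library alone, and
# transparently uses NumPy when it is installed.
try:
    import numpy as np
    HAVE_NUMPY = True
except ImportError:
    HAVE_NUMPY = False

def dot(weights, inputs):
    """Weighted sum of inputs: NumPy when available, pure Python otherwise."""
    if HAVE_NUMPY:
        return float(np.dot(np.asarray(weights), np.asarray(inputs)))
    return sum(w * i for w, i in zip(weights, inputs))
```

The same pattern extends to any extension module (NumPy, CuPy, cudamat): import at module load, record availability in a flag, and branch once per call.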


cliftonarms commented 7 years ago

Would be awesome if you could, as your NEAT implementation would "fly" with a little matrix GPU assistance. It might even outperform the SharpNEAT version.

evolvingfridge commented 7 years ago

I was looking at OpenCL to implement with NEAT, but eventually gave up on that idea, since the main problem, at least in my case, is not waiting 3+ hours but making sense of the results. Additionally, it's fairly cheap to get lots of CPUs to speed up simulation; for example, Google offers $300 of computing power on their cloud, and that's a lot of computing :) In what cases is GPU speed required for neat-python?

cliftonarms commented 7 years ago

Speeding up with a GPU is relatively simple if CUDA is used. All it would need is a couple of the matrix-intensive calculations shipped to the GPU, and training would be accelerated 10x.

A flag could be used to select a GPU feed-forward implementation instead of the pure-Python one.

See here for a nice example https://weeraman.com/put-that-gpu-to-good-use-with-python-e5a437168c01
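The flag idea could be sketched as below. Everything here is illustrative: `make_backend` and `feed_forward` are hypothetical names, and neat-python's own `FeedForwardNetwork` is node-based rather than matrix-based, so a real integration would first need a dense-matrix representation of the network.

```python
# Hypothetical sketch: a use_gpu flag that selects CuPy (GPU, requires CUDA)
# when available, falling back to NumPy on the CPU. NumPy and CuPy share an
# API, so the same feed-forward code runs on either backend.
import numpy as np

def make_backend(use_gpu=False):
    """Return an array module: CuPy if requested and installed, else NumPy."""
    if use_gpu:
        try:
            import cupy as cp
            return cp
        except ImportError:
            pass  # no CuPy/CUDA available; fall back to CPU
    return np

def feed_forward(xp, weights, inputs):
    # One dense layer: y = tanh(W @ x), on whichever backend xp is.
    return xp.tanh(weights @ inputs)
```

With this shape, the library code calls `feed_forward(xp, ...)` everywhere and only the backend-selection line depends on the flag.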

I am trying your NEAT out on stock prediction: 20,000 rows of 100 inputs. Even running in parallel on 8 cores, it takes 24 hours.

By the way, the code needs editing to run in parallel on Windows machines. The input and output arrays need to be passed to eval_genome() as function parameters, NOT as globals. This is because Windows spawns completely fresh processes for each pool call, so globals set in the parent process are not available to the workers.
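One way to do this without changing the library is to bind the dataset into the evaluation function with `functools.partial`, so spawned workers receive it as arguments. The data and fitness below are placeholders.

```python
# Sketch (hypothetical data and fitness) of a Windows-safe parallel setup:
# functools.partial bakes the dataset into the eval function, so workers
# started with the 'spawn' method do not need module-level globals.
from functools import partial

def eval_genome(genome, config, inputs, outputs):
    # Real usage would build a network from the genome, e.g.
    #   net = neat.nn.FeedForwardNetwork.create(genome, config)
    # and score its predictions; the fitness here is a placeholder.
    error = sum((o - 0.0) ** 2 for o in outputs)
    genome.fitness = -error

# bound = partial(eval_genome, inputs=xi, outputs=xo)  # xi/xo: your dataset
# pe = neat.ParallelEvaluator(8, bound)                # calls bound(genome, config)
```

`neat.ParallelEvaluator` invokes its eval function as `f(genome, config)`, so binding the extra arguments by keyword keeps the expected signature.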

mdalvi commented 7 years ago

As @drallensmith said, it could be done such that the code works both with basic Python libraries and with NumPy and related extensions. One such extension that could serve is cudamat, for example, see here.

Additional references

mdalvi commented 7 years ago

@cliftonarms since you mentioned that you are using NEAT to beat stock market, this research paper might be useful for you, Designing Safe, Profitable Automated Stock Trading Agents Using Evolutionary Algorithms

This may not be related to the issue, but just out of curiosity, what kind of network are you building that takes 100 inputs? I am sure you have taken care that your inputs are fairly uncorrelated.

cliftonarms commented 7 years ago

@mdalvi I'm not building a network, I am letting NEAT do that. ;o)

I am fed up with the vanishing-gradient problems of deep neural nets. I have tried evolution strategies, which remove the vanishing-gradient problem but still require the technician to select a network topology.

This is why I am trying the NEAT method; early days yet. A good thing about NEAT is that it learns to disregard useless inputs, so it's more tolerant of lots of input variables.

Thanks for the article link.

The-Odor commented 4 months ago

Are there any thoughts on adding GPU support?