cliftonarms opened this issue 7 years ago
If it could be done such that the library (as opposed to example) code would work both with only the basic Python libraries and with NumPy & related extensions when available, that could work. I have been examining CMA-ES (purecma.py) as a possible model for this, for instance.
-Allen
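The optional-dependency approach Allen describes is a common pattern: attempt the NumPy import, and fall back to pure-Python math if it fails. A minimal sketch (the `dot` helper here is illustrative, not part of neat-python's actual API):

```python
# Optional-NumPy pattern: vectorize when NumPy is installed,
# fall back to pure Python otherwise.
try:
    import numpy as np
    HAVE_NUMPY = True
except ImportError:
    HAVE_NUMPY = False

def dot(weights, inputs):
    """Weighted sum of inputs, vectorized when NumPy is present."""
    if HAVE_NUMPY:
        return float(np.dot(np.asarray(weights), np.asarray(inputs)))
    return sum(w * x for w, x in zip(weights, inputs))

print(dot([0.5, -1.0], [2.0, 3.0]))  # -2.0
```

Library code written this way has a single code path at the call sites, while users with NumPy installed get the vectorized speedup transparently.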
On Thu, Nov 2, 2017 at 7:16 AM, cliftonarms notifications@github.com wrote:
It would be a real shame not to include the use of GPUs.
https://weeraman.com/put-that-gpu-to-good-use-with-python-e5a437168c01
Great code by the way.
Would be awesome if you could, as your NEAT implementation would "fly" with a little matrix GPU assistance. It may even outperform the SharpNEAT version.
I was looking at OpenCL to implement with NEAT, but eventually gave up on that idea, since most of the problem, at least in my case, is not waiting 3+ hours but making sense of the results. Additionally, it's fairly cheap to get lots of CPUs to speed up simulation; for example, Google offers $300 of computing power on their cloud, and that's a lot of computing :) In what cases is GPU speed required for neat-python?
Speeding up with a GPU is relatively simple if CUDA is used. All it would need is a couple of the matrix-intensive calculations being shipped to the GPU, and training would be accelerated around 10x.
A flag could be used to select a GPU feed-forward implementation instead of the pure-Python one.
See here for a nice example https://weeraman.com/put-that-gpu-to-good-use-with-python-e5a437168c01
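The flag idea could be sketched roughly as below. This is a hypothetical illustration, not neat-python's actual feed-forward code: `feed_forward` and the backend selection are made-up names, and CuPy stands in for the CUDA backend, falling back to pure Python when it is not installed.

```python
import math

try:
    import cupy as xp  # GPU arrays with a NumPy-like API (assumption: CUDA available)
    GPU = True
except ImportError:
    xp = None
    GPU = False

def feed_forward(weights, inputs):
    """One dense layer, sigmoid(W @ x), on the GPU when available."""
    if GPU:
        w = xp.asarray(weights)
        x = xp.asarray(inputs)
        return [float(v) for v in 1.0 / (1.0 + xp.exp(-w.dot(x)))]
    # Pure-Python fallback: same math, one row at a time.
    out = []
    for row in weights:
        s = sum(w * v for w, v in zip(row, inputs))
        out.append(1.0 / (1.0 + math.exp(-s)))
    return out

print(feed_forward([[0.0, 0.0]], [1.0, 2.0]))  # [0.5]
```

Because both backends expose the same function signature, a single config flag (or the import check itself) can switch between them without touching the evaluation loop.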
I am trying your NEAT implementation out on stock prediction: 20,000 rows of 100 inputs. Even running in parallel on 8 cores it takes 24 hours.
By the way, the code needs editing to run in parallel on Windows machines. The input and output arrays need to be passed to eval_genome() as function parameters, NOT as globals. This is because Windows spawns completely fresh processes on each pool call, so globals set in the parent are not inherited, and each new process would otherwise have to reload the input data.
@cliftonarms since you mentioned that you are using NEAT to beat the stock market, this research paper might be useful for you: "Designing Safe, Profitable Automated Stock Trading Agents Using Evolutionary Algorithms"
This may not be related to the issue, but just curious: what kind of network are you building that takes 100 inputs? I am sure you have taken care that your inputs are fairly uncorrelated.
@mdalvi I'm not building a network, I am letting NEAT do that. ;o)
I am fed up with the vanishing-gradient problems of deep neural nets. I have tried evolution strategies, which remove the vanishing-gradient problem but still require the technician to select a network topology.
This is why I am trying the NEAT method; it's early days yet. A good thing about NEAT is that it learns to disregard useless inputs, so it's more tolerant of a large number of input variables.
Thanks for the article link.
Are there any thoughts on adding GPU support?