Ns85 opened this issue 7 years ago
Is CTRNN an LSTM network?
Some changes are expected in the near future that would allow setting up networks manually, i.e. defining static nodes/connections alongside a mutable space; I am not sure yet when such a config will be implemented. Take a look at https://github.com/CodeReclaimers/neat-python/issues/102 for a discussion about the config.
Hey,
LSTMs could probably also be used with CTRNN.
An LSTM is a "beefed-up" version of a normal recurrent neural network. This blog has some nice illustrations: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
According to this paper by Rawal & Miikkulainen, an evolved LSTM recurrent network has better memory properties than a normal recurrent network. http://www.cs.utexas.edu/users/ai-lab/downloadPublication.php?filename=http://nn.cs.utexas.edu/downloads/papers/rawal.gecco2016.pdf&pubid=127570
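For reference, the per-timestep update an LSTM cell performs is roughly the following (a plain-numpy sketch using the usual textbook gate names, nothing specific to neat-python):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: W maps [x, h_prev] to the four gate pre-activations,
    stacked as [input gate, forget gate, cell candidate, output gate]."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    i = sigmoid(z[0:n])          # input gate: how much new information to write
    f = sigmoid(z[n:2 * n])      # forget gate: how much old cell state to keep
    g = np.tanh(z[2 * n:3 * n])  # candidate cell contents
    o = sigmoid(z[3 * n:4 * n])  # output gate: how much of the cell to expose
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state (the cell's output)
    return h, c
```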
Interesting, I need to look more into neural networks; I spent so much time on evolution/mutation that I never got a chance to look into the details of neurons. Personally, I am not aware of any plans for implementing LSTM as of now.
I assume the reset at different time-steps is an attribute of an LSTM node. What do you mean by two or more recurrent nets? Is it like evolving modularity in a NEAT network?
My bad; by "reset", I should have written that it was the state of the network that I wanted to reset. Just like it is implemented in nn.recurrent.RecurrentNetwork. If, for example, you have recurrent networks steering a car, it would be nice to have one you could reset when the road surface changed, and one that never got reset. Does that make sense?
The two networks could each have several modules evolving, just like in NEAT. But they should be separate, so that you could concatenate/fuse their outputs into a decision algorithm or a third network.
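Something like this, roughly (just a sketch; `driving_observations` and `decide` are placeholders, and it assumes two separately evolved genomes rather than true per-genome modules):

```python
import neat

# Sketch only: assumes `genome_a`, `genome_b` and `config` already exist.
net_a = neat.nn.RecurrentNetwork.create(genome_a, config)  # reset on surface change
net_b = neat.nn.RecurrentNetwork.create(genome_b, config)  # never reset

for inputs, surface_changed in driving_observations:
    if surface_changed:
        net_a.reset()                  # wipe only this module's state
    out_a = net_a.activate(inputs)
    out_b = net_b.activate(inputs)
    action = decide(out_a + out_b)     # concatenate/fuse into a decision stage
```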
Don't give me credit for implementing recurrent.py, all credit goes to @CodeReclaimers
It does make sense. Did you look at https://github.com/CodeReclaimers/neat-python/issues/104? There is an interesting link about evolving multiple modules posted by @drallensmith.
As I understand from the paper you linked, NEAT-LSTM is always a fully connected network, and the only parameters that are evolved are read, write and forget - correct?
Weird that @CodeReclaimers is not highlighted in the above post - a bug?
Let me correct it: CodeReclaimers & contributors ^^
The few papers I've read evolved the weight/bias parameters of the network and the connections within the modules, but not the architecture.
One question that I've been thinking about, with particular relevance to LSTMs, is how many times to feed a set of inputs to a non-continuous recurrent network before reading the outputs. For a feedforward network, the input goes through the entire set of layers before the output is read. I'm thinking this might be something for a genome attribute (an IntAttribute, as in @bennr01's hyperneat branch) to determine. (I also need to try to look up any prior research on this... I haven't quite gotten around to it, as I am not focusing on recurrent networks at the moment.)
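Roughly what I have in mind (sketch only; `num_passes` is what such a genome attribute might supply, and `genome`/`config` are assumed to already exist):

```python
import neat

net = neat.nn.RecurrentNetwork.create(genome, config)

def evaluate(inputs, num_passes):
    """Feed the same inputs `num_passes` times so the signal can propagate
    through recurrent loops before the outputs are read."""
    outputs = None
    for _ in range(num_passes):
        outputs = net.activate(inputs)
    return outputs
```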
Re rawal.gecco2016.pdf - amusing; I just referenced that paper in another issue discussion (#105).
Setting up a static LSTM configuration would be one way to do it; otherwise, it would need to be a type of node with multiple distinct inputs - something that will be needed anyway for RBF nodes. One could probably reduce the number of inputs to A. data and B. forget old vs. ignore new - as in, the second input would be a combined forget+write trigger (the ordering of wiping the old value and loading the new one from "data" would need to be correct, of course!). I don't see any need for an output gate, which seems more useful for more-static networks using LSTMs.
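To illustrate the two-input idea (purely a toy sketch, not anything implemented or planned):

```python
def lstm_like_node(data, trigger, state):
    """Toy two-input node: `trigger` near 1.0 means "forget the old value and
    write the new data"; near 0.0 means "ignore the new data and keep the old
    value". Doing both in one interpolation step keeps the wipe/load ordering
    consistent. There is no separate output gate: the output is the state."""
    new_state = (1.0 - trigger) * state + trigger * data
    return new_state, new_state
```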
BTW - if just evolving weights and other numeric attributes, CMA-ES may be helpful; that link includes a pure-Python, no-numpy version in cma/purecma.py. I suspect I will wind up using it for RBF nodes given all the potential parameters... The addition of constraints (perhaps using the method from this paper) to purecma.py would be necessary, incidentally.
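For example, the ask/tell loop of the numpy-based `cma.CMAEvolutionStrategy` looks like this (purecma exposes a similar interface); `flatten_params`, `set_params`, and `fitness` are hypothetical helpers here:

```python
import cma

# Sketch: tune only the numeric parameters (weights/biases) of a fixed network.
x0 = flatten_params(net)                  # hypothetical: initial parameter vector
es = cma.CMAEvolutionStrategy(x0, 0.5)    # 0.5 = initial step size (sigma)
while not es.stop():
    candidates = es.ask()                 # sample a population of parameter vectors
    losses = [-fitness(set_params(net, x)) for x in candidates]  # CMA-ES minimizes
    es.tell(candidates, losses)
es.result_pretty()
```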
Thanks, I'll have to check that out :)
I have been lurking a bit on ES. I know they can perform well on MDPs, but I couldn't seem to figure out how they fare on POMDPs.
Re rawal.gecco2016. lol, yeah.
But then again, with the amount of awesome stuff Miikkulainen has put out, it probably isn't that odd of an occurrence.
@drallensmith, @Ns85, @evolvingfridge, I am still at a loss as to whether the implemented CTRNN is an extension of an RNN. I mean, an RNN structure (usually an LSTM) is sometimes used to solve reinforcement learning problems; would a CTRNN have a similar effect with NEAT? In my opinion, when at least one of the network's outputs is fed back as an input for the next timestep, the network is an RNN, am I right?
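This is how I currently understand the difference (toy single-node sketch, not the library's actual code):

```python
import math

def rnn_step(prev_output, x, w_in, w_rec, bias):
    # Discrete RNN: the previous output is simply fed back in at the next timestep.
    return math.tanh(w_in * x + w_rec * prev_output + bias)

def ctrnn_step(state, x, w_in, w_rec, bias, tau, dt):
    # CTRNN: the state is a leaky integrator, advanced here by one Euler step of
    #   tau * ds/dt = -s + w_in * x + w_rec * tanh(s) + bias
    return state + (dt / tau) * (-state + w_in * x + w_rec * math.tanh(state) + bias)
```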
Hey,
First, thank you guys for making the library. I've been playing around with it for a little over a month now, and it is really addictive :)
Two questions:
Do you have any plans for implementing LSTMs (maybe in a static form, where it is just the weights that get evolved)?
Also, are there any plans for implementing modules for each genome, such that a genome can have two or more recurrent nets that get reset on different timesteps?