wagenaartje / neataptic

:rocket: Blazing fast neuro-evolution & backpropagation for the browser and Node.js
https://wagenaartje.github.io/neataptic/

Suggestions #12

Closed wagenaartje closed 7 years ago

wagenaartje commented 7 years ago

Hi. This thread is meant to show you what I'm working on at the moment. At the same time, it is the place to make suggestions on what to add to or improve in Neataptic.

What I've done

What I'm looking into / working on

I can't guarantee that items on this list will be implemented!

wagenaartje commented 7 years ago

Something I'd actually like to overhaul: I'd like to make neural networks modular, for example:

var model = new Network();
model.add(LSTM(5, Activation.RELU));
model.add(NARX(10, Activation.TANH));
model.add(Dense(10, Activation.ABSOLUTE));

model.compile();
model.train(...

So it basically starts to look a little bit like Keras. I think I can implement this quite quickly, but I'd like some opinions on it first.

I made a separate issue for this: https://github.com/wagenaartje/neataptic/issues/15

dan-ryan commented 7 years ago

Have you looked at HyperNEAT?

wagenaartje commented 7 years ago

@zanmbi I just took a more thorough look and I think it's pretty interesting. I don't understand it 100%, but from what I do understand, it requires a bit of an overhaul of how I have implemented the GA at the moment. I might implement it in the future, but for now I'll focus on remodelling network creation.

dan-ryan commented 7 years ago

Yeah, I was researching NEAT and found this video: https://youtu.be/t15wUkCXuxQ, which explains HyperNEAT in a nice way. This weekend I'll play around with NEAT and see if I can come up with more suggestions.

dan-ryan commented 7 years ago

How about some multi-threading support?

wagenaartje commented 7 years ago

I have been thinking about multi-threading for a while now. I think the way to go is using webworkers. I don't think the library as it is right now would benefit a lot from GPU support, because Neataptic is very object-oriented (it works on objects instead of matrices), but I'm doing research on it.

But it's something that is definitely going to come in the future. It's quite a pain to be running a GA for hours knowing that it could be run much faster.
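Just to sketch the idea (the worker file and message shape here are made up, not Neataptic API): fitness evaluation could be fanned out like this, since networks can already be serialized with toJSON() / Network.fromJSON().

// main thread: hand serialized genomes to a web worker and collect scores
var worker = new Worker('evaluate.js'); // hypothetical worker script
worker.onmessage = function (e) {
  population[e.data.index].score = e.data.score;
};
population.forEach(function (genome, index) {
  worker.postMessage({ index: index, genome: genome.toJSON() });
});
// inside evaluate.js (sketch): rebuild the genome with Network.fromJSON(),
// run the fitness test, then postMessage({ index: ..., score: ... })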

wagenaartje commented 7 years ago

Coming soon: scheduling functions during training/evolution. I'm also writing an article on text prediction (letting a neural network write some text!).

wagenaartje commented 7 years ago

I saw a question about regularization, but it got removed. What kind of regularization are people interested in? I've started work on dropout.

wagenaartje commented 7 years ago

Text prediction example: I'll post it somewhere, but it won't be very extensive, as my CPU is not powerful enough to run the algorithm on a whole text (that would take about 100 hours of computation time).

I'm looking into GPU and multi-threading support. Multi-threading will only be useful for evolution, as backpropagation is sequential. So my eyes are on GPU now.

dugagjin commented 7 years ago

For regularization:

I think that the mean of the sum of squares of the network weights and biases is a good start since it should be the easiest to implement: MathWorks
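Something like this, I think (just illustrative; I'm assuming the network exposes its connections and node biases the way Neataptic seems to):

// mean of the summed squares of all weights and biases
function meanSquaredWeights(network) {
  var sum = 0, count = 0;
  network.connections.forEach(function (conn) {
    sum += conn.weight * conn.weight;
    count++;
  });
  network.nodes.forEach(function (node) {
    sum += node.bias * node.bias;
    count++;
  });
  return sum / count;
}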

Btw, thanks for the good work that you have done! It's an amazing framework to play with. :)

wagenaartje commented 7 years ago

Looks interesting. That is L2 regularization, right? I understand how it works; I just don't know exactly how to implement it correctly. Should I implement it alongside the connection weight change, or somewhere else? I'm looking into it.

From what I'm seeing, I should implement it like this:

// Old
connection.weight += rate * gradient;

// New
connection.weight = connection.weight * (1 - (rate * lambda) / trainingSize) + rate * gradient;
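
As a sanity check, here is the same rule as a standalone function (the helper name is mine, not actual Neataptic code); with lambda = 0 it collapses back to the old update:

// weight decay update; lambda = 0 gives weight + rate * gradient (old rule)
function updateWeight(weight, gradient, rate, lambda, trainingSize) {
  return weight * (1 - (rate * lambda) / trainingSize) + rate * gradient;
}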

BTW, regularization will come soon. I have implemented dropout and weight decay, but I don't have sufficient datasets to test them on.

Glad to hear you like it!

wagenaartje commented 7 years ago

The first regularization method has been implemented: dropout. It is an experimental training option. If I don't get any bug reports on dropout within two weeks, I'll start implementing weight decay.
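
Enabling it should look something like this (the dropout key follows my implementation; check the docs for the exact option):

// dropout masks a fraction of the hidden nodes on every training pass
myNetwork.train(trainingSet, {
  rate: 0.3,
  iterations: 1000,
  dropout: 0.5 // experimental
});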

dugagjin commented 7 years ago

L1 is when you take the absolute value and L2 is when you square it. So it is L2. :smile:
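
In code terms (illustrative only):

// L1: sum of absolute weights; L2: sum of squared weights
function l1(weights) {
  return weights.reduce(function (sum, w) { return sum + Math.abs(w); }, 0);
}
function l2(weights) {
  return weights.reduce(function (sum, w) { return sum + w * w; }, 0);
}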

The way you proposed it is an elegant implementation, in my opinion.

Normally, if lambda is set to 0, the L2 regularization falls away and it becomes the classic mean sum of squares of the network errors.

I am not familiar with your repository (the code) and I am a noob, so I can't tell with 100% certainty whether your suggested line mathematically satisfies: if lambda = 0, then regularization = 0. :cry:

wagenaartje commented 7 years ago

Ahah, thought so 😋

If lambda is 0, regularization should be 0. I'll be doing some testing later this week with both L1 and L2.

The only question I still have: is my above implementation L1 or L2? Nowhere do I see a square, so maybe it belongs on some other line of code. I understand you may not be able to answer this because you're unfamiliar with the code 🌚. But I'm doing some research, so I should figure it out soon.

dan-ryan commented 7 years ago

Does regularization work for Neat?

wagenaartje commented 7 years ago

As regularization basically means keeping the number of nodes and the sum of the weights and biases minimal for the dataset, it can easily be implemented in the fitness function:

function fitnessFunction(genome){
  // base score: the lower the test error, the higher the fitness
  var score = -genome.test(dataSet).error;

  // penalize the number of nodes
  score -= genome.nodes.length * multiplier1;

  // penalize large connection weights
  for(var i = 0; i < genome.connections.length; i++){
    score -= Math.abs(genome.connections[i].weight) * multiplier2;
  }

  return score;
}

You can change the multipliers to any value that works for the given dataset. But I can't really make a built-in option for this, as the multipliers are different for every dataset.
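
Plugging it in would look roughly like this (constructor signature from memory, so double-check it against the docs):

// hand the custom fitness function to the NEAT instance
var neat = new neataptic.Neat(inputSize, outputSize, fitnessFunction, {
  popsize: 100
});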

D-Nice commented 7 years ago

Excellent work! Having GPU or multithreading support would be great as well, as the performance is the only thing lacking (understandably) compared to other mature machine learning libraries. Otherwise I definitely must say it's the best ML library for node. Keep it up!

wagenaartje commented 7 years ago

Just a quick update: I'm currently renewing the website and moving the wiki there with MkDocs. This will be done soon, and then I will focus on development again.

wagenaartje commented 7 years ago

I have started working on multi-threading. I got it to work, and I made the fitness evaluation of the entire population in the NEAT algorithm run in parallel, but it's not faster at all. And it doesn't seem like something I can fix...

wagenaartje commented 7 years ago

Closing - suggestions can now have individual threads.