cazala / synaptic

architecture-free neural network library for node.js and the browser
http://caza.la/synaptic

How do I set the output value range? #62

Open agamm opened 9 years ago

agamm commented 9 years ago

Let's say I want to train a network like so: var sinNetwork = new Perceptron(1, 12, 1);

It is trained to approximate the sin function, which outputs values between -1 and 1. How can I get the network to output in that range? Is that possible? Currently it only outputs values between zero and one.

I know that I can transform the number, but it feels messy (i.e. train on (Math.sin(x)+1)/2, then convert back with out * 2 - 1).
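In code, the workaround looks roughly like this (a sketch; the training loop itself is omitted, and synaptic's activate() is assumed):

// Sketch of the scaling workaround described above; assumes the
// sinNetwork Perceptron from the snippet and synaptic's activate().
var x = Math.random() * 2 * Math.PI;
var target = (Math.sin(x) + 1) / 2;     // squash the target from [-1, 1] into [0, 1]
// ... train on input [x] with output [target] ...
var out = sinNetwork.activate([x])[0];  // network output is in [0, 1]
var y = out * 2 - 1;                    // convert back into [-1, 1]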

In addition to that, what if I want to change the squash function to a linear output unit (e.g. one that outputs a real number)? I found that I couldn't access those properties directly from the sinNetwork object.

menduz commented 9 years ago

Mmm, I don't think that's the way a neural network works. Discrete or real numbers aren't the fuel of neural nets; they work with derivatives, gains, and squashed values, and they act as classifiers. If you want to pick an exact value, I think your best choice is another kind of network (e.g. Hopfield) or an algorithm attached to the output of your main net. Even if you apply a mathematical function to the activation result, the values will never be evenly distributed over the desired interval; it is very hard to obtain values close to 0 or 1, and most end up in between.

In this case you could try training a network to predict the quadrant of a circle for a given angle, with 4 outputs. The input must be normalized: if you feed the network values in the range [0, 360] divided by 360, I think you will succeed. But if you train it on those values and then feed it 720 / 360, the net will be confused.
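A rough sketch of that idea (the layer sizes, Trainer options, and training-set granularity here are illustrative, not from the thread):

// Illustrative sketch of the quadrant classifier: one normalized angle
// in, four one-hot outputs (one per quadrant). Layer sizes are arbitrary.
var net = new Architect.Perceptron(1, 8, 4);
var trainingSet = [];
for (var deg = 0; deg < 360; deg += 5) {
  var output = [0, 0, 0, 0];
  output[Math.floor(deg / 90)] = 1;   // one-hot quadrant target (0..3)
  trainingSet.push({ input: [deg / 360], output: output });
}
new Trainer(net).train(trainingSet, { error: 0.01 });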

PS: rethinking it, it might work. It's just an identity function.

Saludos!!


agamm commented 9 years ago

Well, you got me there; I am just a "hobbyist" (at least for now). I took a course on this, but the rest I tried to learn independently. What you wrote makes complete sense, but the following link is what confuses me:

I tried reading how other people implemented it before doing it myself; here is an implementation in R: http://stackoverflow.com/questions/1565115/approximating-function-with-neural-network

They say that you should use a "linear output unit". Are they not using a neural network? So how does it work for them?

My guess is that when they use linout=TRUE they don't really use a perceptron, but maybe regression or something that isn't a strict classifier.

menduz commented 9 years ago

Yes! Take a look at the comments:

By linear output unit, do you mean calculating f(net) = net for the output unit? Because I've tried this and am still having the same problem. – MahlerFive Oct 14 '09 at 9:36

Exactly, a linear function f(x)=a*x – rcs Oct 14 '09 at 10:17

Why must we use linear output unit? I used sigmoid for both hidden layer and output layer and still got good results. – Sunny88 May 14 '12 at 15:14

cazala commented 9 years ago

Hey @funerr, the output of your network is basically an array with the outputs of all the neurons in its output layer. The output of a neuron is given by its activation or squashing function. By default, the squashing function used by all neurons is a logistic sigmoid, which outputs values in the range 0 to 1, but you can use pretty much any function you want as long as it is continuous and differentiable at every point. The library also includes other built-in squashing functions besides the logistic sigmoid, such as hyperbolic tangent, identity, and hard limit:

Neuron.squash.LOGISTIC_SIGMOID
Neuron.squash.TANH
Neuron.squash.IDENTITY
Neuron.squash.HLIM

The TANH squash might be what you are looking for, since its output ranges from -1 to 1. You can set the activation function of a neuron or a layer by passing a squash property to its constructor, or by calling the set(opts) method:

var myNeuron = new Neuron({ squash: Neuron.squash.TANH });
// or
var myNeuron = new Neuron();
myNeuron.set({ squash: Neuron.squash.TANH });
// same for Layers
var myLayer = new Layer(5, { squash: Neuron.squash.TANH });
// or
var myLayer = new Layer(5);
myLayer.set({ squash: Neuron.squash.TANH });

You can create your own activation functions, but they should return the value of their derivative when their second parameter is set to true. The signature for custom activation functions is the following:

squash: function(x:number, derivative:boolean) { }
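For example, a hand-rolled hyperbolic tangent following that signature might look like this (a sketch; TANH is already built in, so this is purely illustrative):

// Custom squash following the signature above: returns tanh(x), or its
// derivative 1 - tanh(x)^2 when the second parameter is true.
function myTanh(x, derivative) {
  var fx = Math.tanh(x);
  return derivative ? 1 - fx * fx : fx;
}
var myNeuron = new Neuron({ squash: myTanh });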

agamm commented 9 years ago

@cazala, yeah, I noticed that. How can I add a squashing function to a "pre-made" network, like a Perceptron? It sounds like a hassle to have to construct the network again just to change the squashing function. Or is there a deeper reason for that?

cazala commented 9 years ago

@funerr right now there's no way to set a squashing function on all the layers in a network at once, but you can set it layer by layer, like:

var myPerceptron = new Architect.Perceptron(2, 10, 10, 10, 1);
myPerceptron.layers.input.set({ squash: Neuron.squash.TANH });
myPerceptron.layers.hidden.forEach(function(hiddenLayer){
    hiddenLayer.set({ squash: Neuron.squash.TANH });
});
myPerceptron.layers.output.set({ squash: Neuron.squash.TANH });

agamm commented 9 years ago

@cazala, thanks for the comment! Is this something you would do a lot? Maybe there should be a method like Perceptron.setSquash()?
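In the meantime, a userland helper along those lines could just wrap the per-layer loop from above (hypothetical, not part of the library):

// Hypothetical helper mirroring the per-layer loop above; not part of
// synaptic's API.
function setSquash(network, squash) {
  network.layers.input.set({ squash: squash });
  network.layers.hidden.forEach(function (hiddenLayer) {
    hiddenLayer.set({ squash: squash });
  });
  network.layers.output.set({ squash: squash });
}
setSquash(myPerceptron, Neuron.squash.TANH);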

menduz commented 9 years ago

I think it would be better in the network, layer, or neuron constructor. Since the network is optimized into raw operations and its methods are overridden, you would have to recompile the entire network after changing it.

Personally I don't see a case where this method would be useful; it's like changing a weight manually. Maybe it would be useful in some training scenarios to speed things up at the beginning of training: train until the error drops from 1 to 0.1, then change to the final squashing function and train again until you reach the desired error with the definitive one. It's an interesting idea. We should try it!
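Roughly, the idea would look like this (a sketch reusing the hypothetical setSquash helper from above; the error thresholds are illustrative):

// Sketch of the two-stage idea: do a coarse pass with one squash, then
// swap in the definitive squash and keep training.
var trainer = new Trainer(net);
trainer.train(trainingSet, { error: 0.1 });   // coarse pass
setSquash(net, Neuron.squash.TANH);           // switch to the final squash
trainer.train(trainingSet, { error: 0.005 }); // fine-tune with the definitive one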

Saludos!


cazala commented 9 years ago

Yeah, I believe a logical place to put this kind of feature would be the Network#set() method. Actually, right now the only thing you can set through that method is the network's layers, like:

myNetwork.set({
  layers: { input: ..., hidden: [...], output: ... }
})

but we could also add the possibility of passing a squash option and setting it on all the layers in that network. That was the reason I named the method set instead of setLayers or something like that: I wanted to expand it later to support squash, bias, etc., the same as Layer#set()... but like many other things that I start, I never quite finished it :P. Anyway, I don't think that's something you would use a lot, but it would be nice to have, and it would be more consistent with Layer#set().
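If that were added, usage might look like this (proposed, not implemented at the time of this thread):

// Proposed: one call that applies the squash to every layer in the network.
myNetwork.set({ squash: Neuron.squash.TANH });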