cazala / synaptic

architecture-free neural network library for node.js and the browser
http://caza.la/synaptic

Adding a second hidden layer results in same output for every input #106

Open FunkMonkey opened 8 years ago

FunkMonkey commented 8 years ago

I am quite new to neural networks. We are trying to recognize points of interest (POIs) with a neural network, based on the received signal strength (RSSI) of Bluetooth beacons and on device sensors. Our network has 11 inputs (the RSSIs of 5 beacons, some values calculated from the RSSIs over time, and the compass heading) and 5 outputs (1 output per POI).

Using a perceptron with a single hidden layer gives OK results, but once we start adding a second hidden layer, the neural network always returns the same output no matter what input we provide.

I know this is quite a generic question (especially without knowing the data, which I could still provide), but are there any reasons why adding a second hidden layer might completely break the network?
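One possible cause (a guess, without having seen the code or data): raw RSSI values are large negative numbers (roughly -40 to -100 dBm), and if they are fed into logistic units without normalization, those units saturate to nearly 0 or 1 regardless of the input, so the next layer sees almost identical activations for every sample. The sketch below is plain JavaScript, not synaptic; the weights and RSSI vectors are made up for illustration.

```javascript
// Minimal sketch: a single logistic unit fed unnormalized RSSI values.
// The weighted sums are so far into the negative tail that the unit's
// output is ~0 for any realistic input, i.e. the unit is saturated.
const logistic = x => 1 / (1 + Math.exp(-x));

// Hypothetical small positive weights over 5 raw RSSI inputs.
const weights = [0.2, 0.2, 0.2, 0.2, 0.2];
const activate = rssi =>
  logistic(rssi.reduce((sum, x, i) => sum + x * weights[i], 0));

const a = activate([-45, -60, -90, -70, -55]); // one position
const b = activate([-80, -95, -50, -65, -85]); // a very different position

console.log(a, b); // both ~0: downstream layers cannot tell them apart
```

If that is what is happening, scaling the inputs into a small range (e.g. mapping RSSI into [0, 1]) before training would likely help more than changing the architecture.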

Thanks a lot!

ghost commented 8 years ago

Put your code on pastebin and then we can have a look. Sounds like a coding issue somewhere.

FunkMonkey commented 8 years ago

@Pummelchen Thanks a lot!

Here is the code on pastebin and here is the training-data that I used.

I reduced the code to a minimum (e.g. I left out the part where I transform the RSSI values into the network's inputs). The example works when the hiddenLayers array has only one element (e.g. 20 neurons). With two hidden layers, every input yields the same result (see the table in the console).

Let me know if you need anything else. Thank you for your help!

FunkMonkey commented 8 years ago

@Pummelchen I tried TANH as the squash function and, accordingly, MSE as the cost function. A network with two hidden layers now produces different values; it is still only a limited set of values, but better than nothing. I will experiment some more...
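The improvement with TANH is plausible: unlike the logistic function, tanh is zero-centered, so a hidden layer does not feed purely positive activations forward. A related issue with stacking logistic layers is that the logistic derivative is at most 0.25, so gradients shrink multiplicatively with each extra layer. A small sketch of that bound, in plain JavaScript (not synaptic):

```javascript
// Why stacking saturating logistic layers can stall learning:
// sigma'(x) = sigma(x) * (1 - sigma(x)) peaks at 0.25 (at x = 0)
// and is nearly 0 for saturated units, so a gradient passing through
// two hidden layers is scaled by at most 0.25 * 0.25 = 0.0625.
const sigma = x => 1 / (1 + Math.exp(-x));
const dSigma = x => sigma(x) * (1 - sigma(x));

console.log(dSigma(0));             // 0.25, the maximum
console.log(dSigma(-10));           // ~0.000045: a saturated unit barely learns
console.log(dSigma(0) * dSigma(0)); // 0.0625: best case through two layers
```

Note that tanh also saturates for large-magnitude inputs, so normalizing the inputs still matters regardless of the squash function.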