yabata / pyrenn

A Recurrent Neural Network Toolbox for Python and Matlab
GNU General Public License v3.0

Errors in documentation #7

Closed eeegnu closed 6 years ago

eeegnu commented 6 years ago

I was reading through the documentation and found a few things I believe to be mistakes.

In this image, the middle weight going from layer 3 to layer 2 should be LW 2,3 [d], rather than LW 1,2 [d]: https://raw.githubusercontent.com/yabata/pyrenn/master/doc/img/recurrent_nn.png

Here, layer 1 is missing its bias values: https://github.com/yabata/pyrenn/blob/master/doc/img/MLP2221_detailed.png

And while the example in the docs says LW 3,2 has 4 weights, there are only 2 in the corresponding example image (the 2 neurons in the (M-1)th hidden layer connect to a single neuron in the output layer). Does this mean it is still stored as a 2x2 matrix, but two of those weights are actually zeros?

yabata commented 6 years ago

Hi,

thanks for pointing out the mistakes in the figures. I think I have fixed them.

I didn’t understand what you meant by the last part:

And while the example in the docs says LW 3,2 has 4 weights, there are only 2 in the corresponding example image (the 2 neurons in the (M-1)th hidden layer connect to a single neuron in the output layer). Does this mean it is still stored as a 2x2 matrix, but two of those weights are actually zeros?

Best regards

Dennis


eeegnu commented 6 years ago

Hi, I've put it all into one picture, since referencing it otherwise gets a bit confusing.

(attached image: pyrenn_layer_weights)

yabata commented 6 years ago

This was a mistake. Thanks!

eeegnu commented 6 years ago

Hi,

Just one more comment: I think you meant to convert the 2x2 matrix LW^3,2 into a row rather than a column in the updated docs, since there is no w^3_2,1.
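To spell out the shape argument (a quick NumPy sketch of the convention, not pyrenn's internal storage code; the layer sizes follow the 2-2-2-1 example network from the docs):

```python
import numpy as np

# Sizes from the MLP 2-2-2-1 example: the last hidden layer (layer 2 here,
# i.e. layer M-1) has 2 neurons, the output layer (layer 3, i.e. layer M) has 1.
n_from = 2  # neurons in the source layer (M-1)
n_to = 1    # neurons in the destination layer (M)

# By the usual convention, LW^{3,2}[i, j] is the weight from neuron j of
# layer 2 to neuron i of layer 3, so the matrix has shape (n_to, n_from).
LW_32 = np.random.randn(n_to, n_from)

print(LW_32.shape)  # (1, 2): a 1x2 row with only 2 weights, not a 2x2 matrix
```

With one output neuron there is no second row, so entries like w^3_2,1 simply don't exist; the matrix degenerates to a row vector, which matches the 2 weights visible in the example image.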

Best, Eugene

yabata commented 6 years ago

👍