nicodjimenez / lstm

Minimal, clean example of lstm neural network training in python, for learning purposes.

about tan(s) #9

Open lcdevelop opened 7 years ago

lcdevelop commented 7 years ago

Hi, sorry to bother you again. I read this line in your code: self.state.h = self.state.s * self.state.o, but the paper seems to say it should be: self.state.h = np.tanh(self.state.s) * self.state.o. Would you tell me which one is right?
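For reference, a tiny standalone sketch of the difference between the two lines (the numbers are just illustrative):

```python
import numpy as np

s, o = 0.9, 0.5               # example cell state and output-gate activation

h_current = s * o             # current code:  h = s * o
h_paper   = np.tanh(s) * o    # paper variant: h = tanh(s) * o

print(h_current, h_paper)     # 0.45 vs ~0.358: the tanh keeps h inside (-1, 1)
```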

xylcbd commented 7 years ago

the second.

ScottMackay2 commented 7 years ago

Quote from the paper: "It is customary that the internal state first be run through a tanh activation function, as this gives the output of each cell the same dynamic range as an ordinary tanh hidden unit. However, in other neural network research, rectified linear units, which have a greater dynamic range, are easier to train. Thus it seems plausible that the nonlinear function on the internal state might be omitted."

But with the current example code, adding the tanh seems to give a better result. Still, both are quite accurate:
With tanh (100 iterations), loss: 6.31438767294e-07
Without tanh (100 iterations), loss: 2.61076356822e-06

(Note: do not confuse this tanh with the tanh at the input a.k.a. LstmState.g)
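To make the distinction concrete, here is a sketch of one forward step showing both places a tanh can appear; the gate equations are my paraphrase of a standard LSTM cell, not lines copied from lstm.py:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, s_prev, W, b):
    """One LSTM step; W and b are dicts of per-gate weights/biases (illustrative)."""
    z = np.hstack((x, h_prev))
    g = np.tanh(np.dot(W['g'], z) + b['g'])   # input squashing -- this is LstmState.g
    i = sigmoid(np.dot(W['i'], z) + b['i'])   # input gate
    f = sigmoid(np.dot(W['f'], z) + b['f'])   # forget gate
    o = sigmoid(np.dot(W['o'], z) + b['o'])   # output gate
    s = g * i + s_prev * f                    # new cell state
    h = np.tanh(s) * o                        # the optional second tanh on the state...
    # h = s * o                               # ...versus the repo's current output line
    return h, s
```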

tl;dr: both with and without the tanh() are possible.

ScottMackay2 commented 7 years ago

I think the back-propagation also has to account for this addition, probably by applying the tanh derivative somewhere. But because I don't fully understand the back-propagation yet, I can't pinpoint exactly what to do. Funny that the overall loss is already better even without the back-prop fix.

I also tried replacing these lines in the back-propagation (located at the beginning of the function top_diff_is):
ds = self.state.o * top_diff_h + top_diff_s
do = self.state.s * top_diff_h

I changed them into (adding np.tanh around both s values):
ds = self.state.o * top_diff_h + np.tanh(top_diff_s)
do = np.tanh(self.state.s) * top_diff_h

This resulted in an even better loss of 4.26917706433e-07, but I am skeptical about its correctness.

Anyway, I am only saying this for people who want to add the tanh for performance improvements; I am not saying it should be added to the code. The code is simpler without the tanh, which makes it easier to understand for learning purposes.
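For anyone who does want to try the tanh variant, here is my reading of what the matching backward pass would look like; this is a hedged sketch from the chain rule (the helper name is mine), not code from the repo, so please double-check it:

```python
import numpy as np

def top_diffs_with_tanh(s, o, top_diff_h, top_diff_s):
    """Gradients w.r.t. the cell state s and output gate o when the forward
    pass uses h = tanh(s) * o instead of h = s * o."""
    tanh_s = np.tanh(s)
    # dh/ds = o * (1 - tanh(s)**2), so the tanh derivative scales the gradient
    # arriving through h; the direct top_diff_s contribution is unchanged.
    ds = o * top_diff_h * (1.0 - tanh_s ** 2) + top_diff_s
    # dh/do = tanh(s)
    do = tanh_s * top_diff_h
    return ds, do
```

If I have applied the chain rule correctly, the tanh belongs around self.state.s (with its derivative multiplying the gradient coming through h), rather than around top_diff_s.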

ZhangPengB commented 3 years ago

Hello. I learned a lot from reading the comment above, but I have a question here: if we add the tanh, shouldn't the first line be ds = self.state.o * top_diff_h * (1 - np.tanh(top_diff_s) ** 2) + top_diff_s? I think it should. Welcome to discuss.
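A quick way to check which formula is consistent with the tanh forward pass is a finite-difference test; here is a small standalone sketch (names and numbers are just illustrative):

```python
import numpy as np

def forward_h(s, o):
    return np.tanh(s) * o          # tanh variant of the output line

def grad_check(s, o, top_diff_h, top_diff_s, eps=1e-6):
    # Proposed analytic gradient w.r.t. s (tanh derivative on the h path)
    analytic = o * top_diff_h * (1.0 - np.tanh(s) ** 2) + top_diff_s
    # Numerical gradient of the scalar objective top_diff_h*h + top_diff_s*s
    obj = lambda s_: top_diff_h * forward_h(s_, o) + top_diff_s * s_
    numeric = (obj(s + eps) - obj(s - eps)) / (2.0 * eps)
    return analytic, numeric

print(grad_check(s=0.3, o=0.8, top_diff_h=1.5, top_diff_s=0.7))
# The two values agree, which supports multiplying by (1 - np.tanh(s) ** 2)
# with self.state.s inside the tanh (rather than top_diff_s).
```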