nicodjimenez / lstm

Minimal, clean example of lstm neural network training in python, for learning purposes.

Where does constant error carousel come from? #2

Closed evbo closed 8 years ago

evbo commented 8 years ago

Referencing the first line in your backward pass:

# notice that top_diff_s is carried along the constant error carousel
ds = self.state.o * top_diff_h + top_diff_s

Mathematically, where does the + top_diff_s come from? Is my guess accurate, that it is just a fudge factor to prevent the gradient from going to zero (hence preventing the vanishing gradient problem)? Or is there more math behind it that I'm overlooking?
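For context, here is my paraphrase of the forward step as I read it in lstm.py (not copied verbatim; the weight/bias names below are my own shorthand), just so it's clear which s and h I mean:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_forward(x, h_prev, s_prev, W, b):
    # one forward step: four gates on the concatenated input, then the
    # cell state s and hidden state h as elementwise sums/products
    xc = np.hstack((x, h_prev))                 # stacked input [x(t), h(t-1)]
    g = np.tanh(np.dot(W['g'], xc) + b['g'])    # candidate cell input
    i = sigmoid(np.dot(W['i'], xc) + b['i'])    # input gate
    f = sigmoid(np.dot(W['f'], xc) + b['f'])    # forget gate
    o = sigmoid(np.dot(W['o'], xc) + b['o'])    # output gate
    s = g * i + s_prev * f                      # cell state carries s_prev through f
    h = s * o                                   # hidden state (no extra tanh on s here)
    return h, s

# tiny usage with random weights (shapes are made up for illustration)
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((4, 3 + 4)) for k in 'gifo'}
b = {k: np.zeros(4) for k in 'gifo'}
h, s = lstm_cell_forward(rng.standard_normal(3), np.zeros(4), np.zeros(4), W, b)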

Thanks again for your clarification!

nicodjimenez commented 8 years ago

@evbo I'm almost done writing up a blog post that explains all the backprop equations; hopefully that will clarify everything.

nicodjimenez commented 8 years ago

@evbo please read http://nicodjimenez.github.io/2014/08/08/lstm.html which explains what the backprop code does. I will expand the article soon, but it fully addresses your question. Let me know if you find it useful / confusing.

evbo commented 8 years ago

Thanks. To help provide some more feedback, here's specifically where I'm still finding myself confused:

Below, you multiply by top_diff_h (clearly per the chain rule), but I'm not certain why you add top_diff_s:

# notice that top_diff_s is carried along the constant error carousel
ds = self.state.o * top_diff_h + top_diff_s

Your explanation for recursively summing the loss, and hence recursively summing the derivatives of the loss w.r.t. h(t), makes perfect sense. But why recursively sum the derivative w.r.t. s(t)?

I know s(t) is inherently recursive per the forget gate. Does that have something to do with it? I'm sure the answer must be staring me in the face, but I've been so deeply focused that I'm missing it! Any additional light you can shed is greatly appreciated.
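To be concrete, the recursion I have in mind (writing \odot for the elementwise product) is

s(t+1) = g(t+1) \odot i(t+1) + s(t) \odot f(t+1), \qquad \frac{\partial s(t+1)}{\partial s(t)} = f(t+1)

so the only way s(t) touches the next step's state is through the forget gate.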

nicodjimenez commented 8 years ago

Hi @evbo, this is a good question. I've updated the tutorial http://nicodjimenez.github.io/2014/08/08/lstm.html to explain; let me know if this is clear.
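The short version, picking up the recursion you wrote and using h(t) = s(t) \odot o(t) (the form used here, without an extra tanh on the cell output): the loss reaches s(t) along two paths, through h(t) at the current step and through s(t+1) at the next step, and each path contributes one chain-rule product:

\frac{dL}{ds(t)} = o(t) \odot \frac{dL}{dh(t)} + f(t+1) \odot \frac{dL}{ds(t+1)}

In the code, top_diff_h is dL/dh(t) and top_diff_s is the whole second term, handed back by the step at t+1; that running second term is the constant error carousel the comment refers to.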

evbo commented 8 years ago

Great. Now please pardon me for splitting hairs, but see equation 4.14 on page 49 of http://www.cs.toronto.edu/~graves/phd.pdf

I guess I'll never fully know until I derive that equation myself, but, hand-wavingly, it seems a few terms might be missing from your derivative of the loss w.r.t. s(t) in comparison to Graves's.

nicodjimenez commented 8 years ago

@evbo Just because two equations "look" different does not mean they are not the same. My derivation is a relatively rigorous application of the chain rule; if you find a specific mistake in the math, please let me know. I know the code is correct because I've done gradient checks, but it's possible there's a typo in the math.
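For reference, by "gradient checks" I mean comparing the analytic gradient against central finite differences. A generic sketch (not the exact script in this repo; loss_fn and grad_fn are placeholders for whatever forward/backward pair is being tested):

import numpy as np

def gradient_check(loss_fn, grad_fn, params, eps=1e-5):
    # compare analytic gradients to central finite differences,
    # one parameter at a time, and return the worst relative error
    analytic = grad_fn(params)
    numeric = np.zeros_like(params)
    for idx in np.ndindex(params.shape):
        old = params[idx]
        params[idx] = old + eps
        loss_plus = loss_fn(params)
        params[idx] = old - eps
        loss_minus = loss_fn(params)
        params[idx] = old                    # restore the original value
        numeric[idx] = (loss_plus - loss_minus) / (2 * eps)
    denom = np.maximum(np.abs(analytic) + np.abs(numeric), 1e-8)
    return np.max(np.abs(analytic - numeric) / denom)

If the returned value is around 1e-6 or smaller, the analytic gradient almost certainly matches; if it is closer to 1, something is wrong.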

evbo commented 8 years ago

Ok, thanks for confirming. One thing that still confuses me is how you inferred the first line of your derivation under the "details" section. Is that a special extension of the chain rule for recursive functions? Usually it is just a product of the interior and exterior derivatives, but yours also has a sum.

You give a good reason for the sum in layman's terms, but I was wondering if there was a hidden step in the derivation of that first line.

Thanks! Almost totally 100% on this :)
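In case it helps anyone else who lands on this thread, my current reading of that first line is that it is just the multivariable chain rule rather than a special extension for recursive functions: the loss depends on s(t) through two later quantities, h(t) and s(t+1), and the total derivative sums one product per path. In generic notation, if L = L(h(s), u(s)) (with u standing in for s(t+1)):

\frac{dL}{ds} = \frac{\partial L}{\partial h} \frac{\partial h}{\partial s} + \frac{\partial L}{\partial u} \frac{\partial u}{\partial s}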