hugochan / KATE

Code & data accompanying the KDD 2017 paper "KATE: K-Competitive Autoencoder for Text"
BSD 3-Clause "New" or "Revised" License

How does alpha participate in the back-propagation phase? #17

Closed · un-lock-me closed this issue 6 years ago

un-lock-me commented 6 years ago

I hope this is my last question :octocat:

So, I am clear on the theoretical concept, but in the implementation I cannot see how Keras gets alpha to participate in the back-propagation. I know my question is not really about your implementation but about the way Keras works.

From what I know, when Keras back-propagates it updates the weight matrices automatically. I could not find any resource explaining how neurons that have already been set to zero can still participate with the alpha percentage.

I hope this gives some hint of what I mean.

Thanks for taking the time:)

un-lock-me commented 6 years ago

I am still curious about this question :)

hugochan commented 6 years ago

@saria85 It is a good question! I don't think it is specific to Keras; any library that does back-propagation should work as expected.

I think you can look at it this way. Imagine you have a residual connection, y = alpha * x + f(x); even if you force f(x) = 0, you can still back-propagate through x because of the alpha * x term. Similarly, in KATE we reset the loser neurons, but the energy of the loser neurons gets passed to the output via the alpha amplification component.
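To see this concretely, here is a minimal sketch (using TensorFlow's GradientTape for illustration, not the KATE code itself): even when f(x) is forced to zero, the alpha * x term keeps a gradient path open through x.

```python
import tensorflow as tf

alpha = 2.0                    # arbitrary illustration value, not the paper's setting
x = tf.Variable([1.5, -0.7, 0.3])

with tf.GradientTape() as tape:
    f_x = tf.zeros_like(x)     # pretend f(x) has been forced to zero
    y = alpha * x + f_x        # residual-style connection
    loss = tf.reduce_sum(y ** 2)

# d(loss)/dx = 2 * alpha^2 * x, carried entirely by the alpha * x term
print(tape.gradient(loss, x))  # nonzero even though f(x) == 0
```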

Hope this helps.

un-lock-me commented 6 years ago

Thank you for your explanation. I need to learn about back-propagation in detail. Do you know any resource for this? The resources I went through did not cover this part clearly.

What I need is this: imagine I want to back-propagate through something that is a function rather than a pure weight matrix, just like your example y = alpha * x + f(x). I need to understand in detail how alpha and f(x) (which is really a function, not a number) are back-propagated with the current approaches in a neural network model.

Thanks in advance for any ideas or resources that explain this clearly :)

hugochan commented 6 years ago

@saria85 This might help: https://drive.google.com/file/d/17FCuRz2emU7CJPgAASTInBi9zH4EZtNS/view?usp=sharing

un-lock-me commented 6 years ago

Thank you, I will go through it then.

un-lock-me commented 6 years ago

Actually, apart from your source, I went through http://neuralnetworksanddeeplearning.com/chap2.html, but it covered what I already knew. I still have difficulty understanding how alpha gets assigned to the neurons. My point is exactly this sentence of the paper:

Note that in the back-propagation procedure, the gradients will first flow through the winner neurons in the hidden layer and then the loser neurons via the α amplification connections.

I did not find anything in the implementation that shows how the gradients first flow through the winner neurons and then reach the loser neurons via the alpha amplification.
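If I sketch the mechanism myself, it looks something like the toy, positive-only version below (my own assumption, not the actual KATE layer, which splits positive and negative activations and distributes the energy differently): the losers are zeroed in the output, yet the loser-energy term keeps them in the computation graph, so autodiff sends gradients back through them scaled by alpha.

```python
import tensorflow as tf

def k_competitive(z, k, alpha):
    # Toy sketch: keep the k largest activations (the "winners"), zero the
    # rest, and add alpha times the losers' total energy to the winners.
    _, winner_idx = tf.math.top_k(z, k=k)
    winner_mask = tf.reduce_sum(
        tf.one_hot(winner_idx, depth=tf.shape(z)[-1]), axis=0)
    loser_energy = tf.reduce_sum(z * (1.0 - winner_mask))
    # The losers are zeroed in the output, but loser_energy still depends
    # on them, so autodiff routes gradients back through them as well.
    return winner_mask * (z + alpha * loser_energy / float(k))

z = tf.Variable([0.9, 0.1, 0.4, 0.2])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(k_competitive(z, k=2, alpha=2.0))
print(tape.gradient(loss, z))  # winners get gradient 1.0, losers get alpha = 2.0
```

Is this roughly what happens under the hood?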

Thanks :)