Closed: yuyingg0921 closed this issue 5 years ago
Hello @yuyingg0921, thank you for your attention to our work! As illustrated in Figure 1 in the paper, during the feedforward phase the signal flows from the losers to the output layer via the alpha amplification connections, which means the losers do contribute to the output. Therefore, when we do backpropagation, the gradient signal flows back from the output layer to those losers and we can update their weights accordingly.
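The amplification path can be sketched with a toy NumPy version of the k-competitive step. This is a simplification for illustration only (it ignores the paper's separate positive/negative neuron pools and the exact energy split), not the repository's code:

```python
import numpy as np

def k_competitive(z, k=2, alpha=6.2):
    """Simplified k-competitive step: the top-k winners (by magnitude)
    keep their activation plus alpha times the total energy absorbed
    from the losers; losers output zero but still feed the winners."""
    out = np.zeros_like(z)
    winners = np.argsort(np.abs(z))[-k:]              # top-k by |activation|
    losers = np.setdiff1d(np.arange(z.size), winners)
    loser_energy = np.abs(z[losers]).sum()            # energy taken from losers
    out[winners] = z[winners] + alpha * np.sign(z[winners]) * loser_energy
    return out

z = np.array([0.1, -0.05, 0.9, 0.8])
print(k_competitive(z))  # losers' magnitudes show up inside the winners' outputs
```

Because the losers' magnitudes enter the winners' outputs through the alpha term, any change to a loser's incoming weights changes the reconstruction loss, so automatic differentiation (which Keras/TensorFlow performs on this graph) updates those weights as well.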
Hope this helps.
Thank you!! I understand it in theory now :) But where can I find the implementation of the backpropagation part? (Keras usually does it automatically.) If it isn't implemented explicitly, how can we make sure the signal actually flows back from the output layer to those losers?
@yuyingg0921 You can get the gradient of a variable using tf.gradients.
Can you elaborate more on this part? I am also interested in tracking the backpropagation. Thanks :)
Please also refer to #17
Hello,
I have a question about your backpropagation. The k-sparse autoencoder suffers from the dead-hidden-neuron problem, which its authors addressed by scheduling the sparsity level over epochs.
In KATE, however, the gradients still flow through the losers via the alpha amplification connections. I understand there is no gradient flowing directly from the output to the losers, but I still can't see how the losers' weights (loser to input) get updated. Or have I misunderstood something? Could you please explain the procedure in detail? I am very interested in this method~
Thank you very much!