Closed PierrickPochelu closed 1 year ago
In short, we can ignore back-propagation through the normalization, for both weights and signals, if the normalization function can be treated as a linear function; otherwise, sometimes we can ignore it and sometimes we can't. (The second paper will focus more on this.)
The thought process was:
I am closing the issue. If this doesn't make sense, you can reopen it.
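A minimal sketch of what "ignoring back-propagation through the normalization" means, using clamp as the example normalization. This is a hypothetical illustration, not AnalogVNN's actual implementation: the exact derivative of clamp is zero wherever the input saturates, while the "ignore it" approximation treats clamp as the identity (a linear function with slope 1) and passes the gradient through unchanged.

```python
# Hypothetical sketch (not AnalogVNN's code): exact clamp gradient vs. the
# straight-through approximation that skips back-propagating the normalization.

def clamp(x, lo=-1.0, hi=1.0):
    """Forward normalization: clip x into [lo, hi]."""
    return max(lo, min(hi, x))

def clamp_grad_exact(x, lo=-1.0, hi=1.0):
    """Exact derivative of clamp: 1 inside the range, 0 where x saturates."""
    return 1.0 if lo < x < hi else 0.0

def clamp_grad_ignored(x, lo=-1.0, hi=1.0):
    """Approximation: treat clamp as linear (identity), so the gradient
    flows through as if the normalization were not there."""
    return 1.0

# Inside [lo, hi] the two gradients agree, so ignoring the normalization
# is exact there; they differ only on saturated values.
for x in (0.5, 2.0):
    print(x, clamp(x), clamp_grad_exact(x), clamp_grad_ignored(x))
```

Whether the two agree in practice depends on how often the inputs saturate, which is presumably why the answer hedges with "sometimes we can ignore it and sometimes we can't."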
Hello,
Thank you for your quick answer the other day.
I have another question regarding the back-propagation through the normalization function (e.g. clamp).
https://analogvnn.readthedocs.io/en/v1.0.0/sample_code.html
In the figure, both weights and signals need the normalization function. However, we see a different behavior during back-propagation (green arrow).
In the text of the paper I don't see any explanation for this. In the code you provided, you use the same clamp function for both signals and weights.
Which behavior is correct? What is the intuition behind not back-propagating through the normalization?