Vivswan / AnalogVNN

A fully modular framework for modeling and optimizing analog neural networks
https://analogvnn.readthedocs.io

Question: weights & signals using the same clamp function? #16

Closed PierrickPochelu closed 1 year ago

PierrickPochelu commented 1 year ago

Hello,

Thank you for your quick answer the other day.

I have another question regarding the back-propagation through the normalization function (e.g. clamp).

https://analogvnn.readthedocs.io/en/v1.0.0/sample_code.html

In the figure, both weights and signals need the normalization function. However, we see a different behavior during back-propagation (green arrow).

In the text of the paper, I don't see any explanation for this. In the code you provided, you use the same clamp function for both signals & weights.

Which is correct? What is the intuition behind not back-propagating through the normalization?

Vivswan commented 1 year ago

In short, we can ignore back-propagation through the normalization for both weights and signals if the normalization function can be thought of as a linear function; otherwise, sometimes we can ignore it and sometimes we can't. (The second paper will focus more on this.)
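
To make this concrete, here is a minimal PyTorch sketch (not the AnalogVNN layer code; the class names are made up) contrasting the two choices for a clamp to [-1, 1]: back-propagating the exact gradient, which is zero outside the clamp range, versus treating the clamp as linear and passing the gradient through unchanged.

```python
import torch


class ClampExactGrad(torch.autograd.Function):
    """Back-propagate through the clamp: the gradient is zero outside [-1, 1]."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(-1.0, 1.0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * ((x >= -1.0) & (x <= 1.0)).to(grad_output.dtype)


class ClampIgnoredGrad(torch.autograd.Function):
    """Treat the clamp as linear: pass the gradient through unchanged."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(-1.0, 1.0)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


# usage: y = ClampExactGrad.apply(x)  or  y = ClampIgnoredGrad.apply(x)
```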

The thought process was:

  1. The first paper of AnalogVNN is supposed to be as generalized as possible for multiple analog systems, not just photonics. (In the second paper, probably this summer, I will focus on many different architectures for implementing neural networks in photonics, compare them with AnalogVNN, and introduce many new photonic layers.)
  2. For weights, when I started working on this, I was thinking in terms of weights either implemented in PCM or coming through the laser along with the inputs. For the laser case we know the normalization function, but for PCM it can be more complicated, so I wanted to see what happens if we completely ignore it, to show that you don't have to worry about calculating the gradient in every layer all the time.
  3. For inputs, I was just thinking about the case where the inputs come directly from the laser, so I wanted to maximize the correctness of the gradients; but since exact gradients for the Reduce Precision and Noise layers are not possible, I only implemented the backward function in the normalization layers (see the straight-through sketch after this list).
  4. This was the simulation I ran for the first paper, and the first paper is supposed to introduce AnalogVNN in order to:
    1. Provide a framework for everyone to actually run photonic neural networks.
    2. Give a way to do large-scale hyperparameter searches in the analog domain.
    3. Show the community not to just directly compare analog systems with digital ones.
  5. So, I didn't try to optimize the gradient flow for photonics that much. (That will be in the second paper.)
  6. But since then I have removed the backward function from all the normalization functions except for Clamp (because it was not doing much either way).
  7. And ran the tests again and found little to no difference.
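
For reference, the straight-through idea behind points 3 and 6 can be sketched in plain PyTorch (hypothetical helpers, not the AnalogVNN API): the non-differentiable operation is applied in the forward pass, while the backward pass treats the layer as the identity, so the gradient flows through unchanged.

```python
import torch


def reduce_precision_ste(x: torch.Tensor, levels: int = 16) -> torch.Tensor:
    """Round x onto a fixed grid in forward; identity (straight-through) in backward."""
    quantized = torch.round(x * levels) / levels
    # x + (quantized - x).detach() equals `quantized` in the forward pass,
    # but its gradient w.r.t. x is 1, so the rounding step is skipped in backward.
    return x + (quantized - x).detach()


def gaussian_noise_ste(x: torch.Tensor, std: float = 0.05) -> torch.Tensor:
    """Add Gaussian noise in forward; the noise term carries no gradient."""
    return x + (std * torch.randn_like(x)).detach()
```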

I am closing the issue. If it doesn't make sense, you can open this issue again.