hatsuka20 opened 1 week ago
Did you have reduce-precision layers in your model?
The std of the Gaussian noise layer is computed from this equation (where leakage is the error probability): https://github.com/Vivswan/AnalogVNN/blob/b639108b1e996d535eb292677a23e5a12908bb61/analogvnn/nn/noise/GaussianNoise.py#L75-L75
So std is not fundamental here: it is calculated from the error probability and the precision. The error probability, or leakage, represents the percentage of values that get corrupted in an analog system, which is a more natural way to talk about errors. This lets us treat the noise level (error probability) as a variable separate from precision, whereas std mixes the two together. That separation matters in a deep learning context, where we want to know the percentage of corrupted values, much like the dropout probability in a dropout layer.
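A minimal sketch of how such a leakage-to-std conversion can work, assuming the model implied above: leakage is the probability that a zero-mean Gaussian sample exceeds half a quantization step, 1/(2 * precision). The helper name `std_from_leakage` is hypothetical, not the library's API; the inversion uses stdlib `math.erf` with bisection rather than `scipy.special.erfinv`:

```python
import math

def std_from_leakage(leakage: float, precision: int) -> float:
    """Hypothetical helper: find std so that
    P(|noise| > 1/(2*precision)) == leakage for noise ~ N(0, std^2).

    From P(|noise| > h) = 1 - erf(h / (std * sqrt(2))) = leakage,
    we get std = h / (sqrt(2) * erfinv(1 - leakage)).
    """
    half_step = 1.0 / (2.0 * precision)
    target = 1.0 - leakage  # the erf value we must hit

    # Invert erf by bisection (the stdlib has erf but not erfinv).
    lo, hi = 1e-12, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if math.erf(mid) < target:
            lo = mid
        else:
            hi = mid
    erfinv_val = (lo + hi) / 2.0
    return half_step / (math.sqrt(2.0) * erfinv_val)
```

Under this model, sweeping std directly (as in the experiment below) changes the effective leakage as well, since the two are tied together through the precision.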
The sample code for creating this type of architecture is present in: https://github.com/Vivswan/AnalogVNN/blob/master/sample_code.py
We tested your sample code. However, we couldn't reproduce your results.
For example, we manually set the `std` of `GaussianNoise` to 0.0, 0.1, ..., 0.4 in the MNIST 3-Linear sample code, but the accuracy of the model did not increase. How do we reproduce the results?
Thank you.