Closed Maya7991 closed 3 weeks ago
Hi,
Thanks for opening this issue. Brevitas's layers are designed as drop-in replacements for the corresponding PyTorch ones, and based on your example I believe there should be no issue combining them with third-party libraries, even though we have no experience with SNNTorch in particular.
You mention observed vs expected values. Where do the expected values come from?
Hi @Giuseppe5 ,
I apologize for the delay. I had to study the basics of model quantization before I could explain my questions clearly.
Use case: train a spiking CNN with a Leaky (leaky integrate-and-fire, LIF) activation function in PyTorch and SNNTorch, then use the INT8 weights of the trained model in my VHDL design.
From your previous reply, I understand there is no problem using SNNTorch together with Brevitas. However, when I compute some channel outputs manually and compare them with the output of the quantized model, I see a difference between observed and expected values.
I have an assumption about why this is happening: the quant and dequant stubs between the layers of a fake-quantized model would not allow such a comparison. Since this is a fake-quantized model, I would need to generate a true INT8 model to compare against my manual calculations.
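To illustrate what I mean by "fake quantized": values are rounded onto an INT8 grid but stored back as FP32, so every intermediate tensor in the model is still a float. This is a minimal stdlib-only sketch of that round trip (the scale value is an arbitrary assumption, not taken from my model):

```python
def fake_quantize(x, scale):
    """Round x to the nearest int8 step, clamp, and dequantize back to float."""
    q = max(-128, min(127, round(x / scale)))  # quantize + clamp to int8 range
    return q * scale                           # dequantize: result is FP32 again

scale = 0.05
w = 0.1234
w_fq = fake_quantize(w, scale)
print(w_fq)  # a float, but one that sits exactly on the int8 grid (2 * scale)
```

So when I probe the model, I never see the raw integer codes, only floats that happen to lie on the quantization grid.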
If that is indeed the problem, can I generate a true INT8 model whose inference pass uses only INT8 values and no FP32 values? I held off posting my thoughts here for so long because I could not decide how much of this falls within the scope of Brevitas.
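For context on why I expect the two to match: with per-tensor scales, the fake-quantized FP32 path and a true integer-only path should agree up to float rounding, since the scales factor out of the accumulation. A stdlib-only sketch with made-up weight codes and scales (not values from my model):

```python
w_int = [3, -5, 7]     # hypothetical int8 weight codes
x_int = [1, 0, 1]      # hypothetical int8 input codes (binary spikes here)
s_w, s_x = 0.05, 1.0   # hypothetical per-tensor scales

# FP32 "fake quant" path: dequantize first, then multiply-accumulate in float
fp_out = sum((wi * s_w) * (xi * s_x) for wi, xi in zip(w_int, x_int))

# True integer path: accumulate the int codes, apply the scales once at the end
acc = sum(wi * xi for wi, xi in zip(w_int, x_int))  # integer accumulator
int_out = acc * (s_w * s_x)

print(acc)  # 10
print(fp_out, int_out)  # equal up to float rounding
```

This is the comparison I am trying to reproduce against my VHDL simulation, which only ever sees the integer path.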
Note: the input to the spiking conv model consists of 0s and 1s (spike or no spike), which makes manual calculation easy: the MAC reduces to a pure accumulation.
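Concretely, with binary inputs each multiply-accumulate just conditionally adds the weight, which is why I can check channel outputs by hand. A tiny 1-D sketch with hypothetical weight codes:

```python
weights = [3, -5, 7, 2]  # hypothetical INT8 weight codes for one channel
spikes  = [1, 0, 1, 1]   # binary input spikes

mac = sum(w * s for w, s in zip(weights, spikes))         # full MAC
acc = sum(w for w, s in zip(weights, spikes) if s == 1)   # accumulation only

print(mac, acc)  # 12 12 -- identical, since multiplying by 0/1 is a select
```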
Thank you!
I still have a few questions about the setup. If you could share a reproducible script showing how you compute the real vs expected results, it would be easier for us to help.
If this is still an issue, please feel free to re-open and we'd be more than happy to help!
I have a spiking convolutional neural network. It uses the Leaky (leaky integrate-and-fire) neuron from the SNNTorch library as its activation function. Is it possible to use activation functions from SNNTorch along with Brevitas? Given below is an example architecture.
Is it possible to use Brevitas along with such custom activation functions?
The purpose of quantizing my model is to extract the INT8 weights and use them in the simulation of a VHDL design I have written. I have recorded the INT8 weights of the quantized spiking convolutional layer (conv2). However, I observe a difference between the values seen after the activation function and the expected values. I would like to know whether Brevitas supports custom activation functions and, if so, whether any additional configuration is needed.
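For reference, the activation I am replicating in VHDL follows the standard leaky integrate-and-fire update (membrane decay, threshold, subtract-reset), which is what SNNTorch's Leaky neuron computes. A minimal stdlib-only sketch, with illustrative beta and threshold values rather than my trained model's:

```python
def lif_step(current, mem, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire step: decay, integrate, fire, soft reset."""
    mem = beta * mem + current           # leaky integration of input current
    spike = 1 if mem > threshold else 0  # fire when the threshold is crossed
    mem -= spike * threshold             # soft reset: subtract the threshold
    return spike, mem

mem = 0.0
for t, i_in in enumerate([0.6, 0.6, 0.6]):
    spike, mem = lif_step(i_in, mem)
    print(t, spike, round(mem, 3))  # spikes on the second step only
```

The mismatch I see is between this hand-computed spike train and the one the fake-quantized model produces after conv2.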