CloudyDory opened 1 year ago
Hi, I am not familiar with the quantization modules in PyTorch, but you can try to use `spikingjelly.activation_based.quantize`.
Another solution is to check the source code of conv/linear in SpikingJelly. They are almost identical to those in PyTorch, except that they also support running in multi-step mode. You can check how to modify them to support the quantization modules in PyTorch.
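As a sketch of the second suggestion: `torch.quantization.prepare_qat()` swaps modules according to a type-to-type mapping, and that mapping can be extended with a custom entry. The class names `SJLinear` and `SJQATLinear` below are illustrative stand-ins, not SpikingJelly API; the sketch assumes the SpikingJelly layer subclasses `nn.Linear`, as its plain counterpart does.

```python
# Sketch: make torch.quantization.prepare_qat() recognize a custom Linear
# subclass by registering it in the QAT module mapping.
# SJLinear / SJQATLinear are illustrative stand-ins, NOT SpikingJelly API.
import torch.nn as nn
import torch.nn.qat as nnqat
from torch.quantization import get_default_qat_qconfig, prepare_qat
from torch.quantization.quantization_mappings import get_default_qat_module_mappings


class SJLinear(nn.Linear):
    """Stand-in for a multi-step-capable Linear (like spikingjelly's layer.Linear)."""


class SJQATLinear(nnqat.Linear):
    """QAT counterpart: reuse nnqat.Linear, but accept SJLinear in from_float()."""
    _FLOAT_MODULE = SJLinear


model = nn.Sequential(SJLinear(4, 2))
model.train()  # prepare_qat requires training mode
model.qconfig = get_default_qat_qconfig("fbgemm")

# Extend the default mapping instead of replacing it.
mapping = dict(get_default_qat_module_mappings())
mapping[SJLinear] = SJQATLinear

qat_model = prepare_qat(model, mapping=mapping)
print(type(qat_model[0]).__name__)  # the custom layer is now swapped for QAT
```

The same idea should apply to the conv layers via `torch.nn.qat.Conv2d`; the details may differ across PyTorch versions.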
For a faster response, you can @ the corresponding developers for your issue. Here is the division:
- Yanqi-Chen
- Yanqi-Chen
- Lyu6PosHao
- lucifer2859
- AllenYolk
- Lyu6PosHao
- DingJianhao
- Yanqi-Chen
- fangwei123456
We are glad to add new developers who volunteer to help solve issues to the above table.
Issue type
SpikingJelly version
0.0.0.0.14
Description
I am hoping to train an SNN with weight quantization in linear and convolution layers using SpikingJelly. However, it seems that the linear and convolution modules in `spikingjelly.activation_based.layer` cannot be recognized by PyTorch's `torch.quantization.prepare_qat()` function.
Minimal code to reproduce the error/bug
This produces the following output:
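Although the original snippet and its output are not shown above, the reported behaviour can be illustrated with plain PyTorch, no SpikingJelly required: `prepare_qat()` matches modules by exact type, so a subclass of `nn.Linear` is left unconverted while a plain `nn.Linear` is swapped for its QAT counterpart. This matches the symptom if `spikingjelly.activation_based.layer.Linear` subclasses `nn.Linear` (an assumption of this sketch).

```python
# Illustration: prepare_qat() swaps modules by exact type lookup, so a
# subclass of nn.Linear is left untouched while a plain nn.Linear is
# converted.  SubclassedLinear is a stand-in for spikingjelly's layer.Linear.
import torch.nn as nn
import torch.nn.qat as nnqat
from torch.quantization import get_default_qat_qconfig, prepare_qat


class SubclassedLinear(nn.Linear):
    """Stand-in for spikingjelly.activation_based.layer.Linear."""


model = nn.Sequential(nn.Linear(4, 4), SubclassedLinear(4, 2))
model.train()
model.qconfig = get_default_qat_qconfig("fbgemm")

qat_model = prepare_qat(model)
print(isinstance(qat_model[0], nnqat.Linear))  # plain nn.Linear is converted
print(isinstance(qat_model[1], nnqat.Linear))  # subclass is not converted
```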