Open · jasonlyik opened 1 month ago
Note that snntorch is not JIT-traceable, because the forward function can return either spikes only or both spikes and membrane potential (mem). So adding torch's QuantizedLinear may be troublesome.
snnTorch recommends the Brevitas library for weight quantization, so we could look into that instead of QuantizedLinear: https://github.com/jeshraghian/snntorch/blob/398c7c45498716d7a60f54e0ac92258a5fd99d41/snntorch/functional/quant.py#L47
Add torch's QuantizedLinear as a recognized class in the connection_sparsity metric; it should be another case in the if/elif statements.
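A minimal sketch of what that extra if/elif case could look like. The `connection_sparsity` function here is a hypothetical stand-in for the real metric (the actual implementation in the repo may differ); the key point is that quantized layers expose weights via a `weight()` method returning a quantized tensor, not a `.weight` attribute.

```python
import torch
import torch.nn as nn

def connection_sparsity(layer):
    """Hypothetical sketch: fraction of zero-valued weights in a layer.
    Assumes the metric dispatches on layer type via if/elif, per the issue."""
    if isinstance(layer, nn.Linear):
        # Float layers store weights as a plain tensor attribute.
        w = layer.weight.detach()
    elif isinstance(layer, torch.ao.nn.quantized.Linear):
        # Quantized layers expose weights via a method, and the tensor is
        # quantized, so dequantize before comparing against zero.
        w = layer.weight().dequantize()
    else:
        raise TypeError(f"Unrecognized layer type: {type(layer)}")
    return (w == 0).float().mean().item()

# Usage: a float Linear with half of its weights zeroed out
fc = nn.Linear(2, 2, bias=False)
with torch.no_grad():
    fc.weight.copy_(torch.tensor([[1.0, 0.0], [0.0, 2.0]]))
print(connection_sparsity(fc))
```

Dispatching on `isinstance` keeps the quantized path isolated, so the existing float-layer cases are untouched and the new branch only handles the dequantize step.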