vishnukashyap / MNIST-QNN

Training a Simple Quantized Neural Network using PyTorch and brevitas

Training error when using model_type : "qnn_LeNet" #1

Closed penny9287 closed 3 years ago

penny9287 commented 3 years ago

Hello, I really appreciate your great work. Recently, I used the code to observe the training performance, but when I trained the model with model_type "qnn_LeNet", the following problem happened:

```
Traceback (most recent call last):
  File "train.py", line 425, in <module>
    main()
  File "train.py", line 415, in main
    tensorboard_writer, lr, momentum)
  File "train.py", line 34, in train
    dnn_model = qnn_model.LeNet(dataset_type).to(device)
  File "/home/MNIST-QNN-main/qnn_model.py", line 101, in __init__
    return_quant_tensor=False)
  File "/home/.local/lib/python3.7/site-packages/brevitas/nn/quant_conv.py", line 188, in __init__
    **kwargs)
  File "/home/.local/lib/python3.7/site-packages/brevitas/nn/quant_layer.py", line 305, in __init__
    QuantWeightMixin.__init__(self, weight_quant, **kwargs)
  File "/home/.local/lib/python3.7/site-packages/brevitas/nn/mixin/parameter.py", line 71, in __init__
    **kwargs)
  File "/home/.local/lib/python3.7/site-packages/brevitas/nn/mixin/base.py", line 104, in __init__
    "The quantizer passed does not adhere to the quantization protocol.")
RuntimeError: The quantizer passed does not adhere to the quantization protocol.
```

Have you ever seen this problem? Looking forward to your reply.

vishnukashyap commented 3 years ago

Hi, thank you for bringing this to my notice. The issue was that the correct quantization argument for initializing the brevitas Conv2d module is `weight_quant_type`, but in the code I was using `weight_quant`. The issue has been fixed, please try it out.