Xilinx / brevitas

Brevitas: neural network quantization in PyTorch
https://xilinx.github.io/brevitas/

We encountered an exception when trying to retrieve "int_weight" from the model #172

Closed solitary-1 closed 4 years ago

solitary-1 commented 4 years ago

We have successfully trained the model of "LFC_1W1A" in bnn_pynq example. When we tried to retrieve "int_weight" from the model, we encountered an exception which stopped us from doing so.


Here is the code we used to print out "int_weight":

```python
def forward(self, x):
    for mod in self.features:
        if isinstance(mod, QuantLinear):
            print(mod.int_weight())  # where we encountered the exception
```

We want to know if there is a problem with our code or our method of getting the quantized weights. Thanks in advance.
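To make the filtering pattern above concrete without depending on Brevitas, here is a minimal, self-contained sketch. `QuantLinear` below is a hypothetical stand-in class, not the real `brevitas.nn.QuantLinear`; it only mimics the behavior this thread describes, where `int_weight()` raises when quantization is not enabled, so the loop can guard against that instead of crashing:

```python
class QuantLinear:
    """Stand-in for brevitas.nn.QuantLinear (hypothetical mock, not the real API)."""
    def __init__(self, quant_enabled, weight):
        self.quant_enabled = quant_enabled
        self._weight = weight

    def int_weight(self):
        if not self.quant_enabled:
            # Mimics the kind of failure reported in this thread: the integer
            # representation only exists when quantization is enabled.
            raise RuntimeError("quantization is not enabled")
        return [round(w) for w in self._weight]


class ReLU:
    """Stand-in for a non-quantized layer."""


def collect_int_weights(features):
    """Gather int_weight() from every QuantLinear in a module list,
    recording None for layers whose quantization is disabled."""
    out = []
    for mod in features:
        if isinstance(mod, QuantLinear):
            try:
                out.append(mod.int_weight())
            except RuntimeError:
                out.append(None)  # quantization disabled for this layer
    return out


features = [QuantLinear(True, [0.6, -1.2]), ReLU(), QuantLinear(False, [2.0])]
print(collect_int_weights(features))  # → [[1, -1], None]
```

The point of the sketch is only the control flow: filter by layer type, then handle the case where quantization was never turned on, which is exactly the situation diagnosed in the replies below.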

volcacius commented 4 years ago

Hello,

What the error is saying is that basically quantization is not enabled. Are you running training using `brevitas_bnn_pynq_train --network LFC_1W1A --experiments /path/to/experiments`?

Alessandro

solitary-1 commented 4 years ago

> Hello,
>
> What the error is saying is that basically quantization is not enabled. Are you running training using `brevitas_bnn_pynq_train --network LFC_1W1A --experiments /path/to/experiments`?
>
> Alessandro

We followed your steps, but there was an error when running that command (screenshot of the error attached). Looking forward to your reply.

volcacius commented 4 years ago

Fixed by #200.