Closed lovodkin93 closed 3 years ago
Hi,
the batch-norm layer calls the `dim()` method of `QuantTensor`, which is only implemented in the latest version on GitHub. See line 284 of https://github.com/Xilinx/brevitas/blob/master/src/brevitas/quant_tensor/__init__.py
I believe you can solve the problem by installing Brevitas from GitHub:
```
pip install git+https://github.com/Xilinx/brevitas.git
```
or by adding the `dim()` method to your currently installed Brevitas:
```python
def dim(self):
    return self.value.dim()
```
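If you prefer not to edit the installed package source, the same fix can be applied as a monkey-patch at startup. Below is a minimal, self-contained sketch of that pattern; `QuantTensor` and `FakeTensor` here are stand-ins written for illustration, not the real Brevitas and torch classes:

```python
class FakeTensor:
    """Stand-in for torch.Tensor, exposing only the dim() method."""
    def __init__(self, shape):
        self.shape = shape

    def dim(self):
        # Number of dimensions, e.g. 4 for an NCHW activation.
        return len(self.shape)


class QuantTensor:
    """Stand-in for an older brevitas QuantTensor that lacks dim()."""
    def __init__(self, value):
        self.value = value  # the wrapped tensor


# The patch from above: delegate dim() to the wrapped tensor,
# then attach it to the class so every instance gains the method.
def dim(self):
    return self.value.dim()


QuantTensor.dim = dim

qt = QuantTensor(FakeTensor((1, 3, 32, 32)))
print(qt.dim())  # -> 4
```

With the real library, the same assignment would target `brevitas.quant_tensor.QuantTensor` after importing it, before the first forward pass through the batch-norm layer.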
Hello, I am trying to use your toolkit to perform QAT. Unlike the examples in your README file, my model contains Batch Normalization layers. When I pass `return_quant_tensor=True` to the QuantConv2d layers that are followed by the Batch Normalization layers, I keep getting the following error: it appears the quantized output of the QuantConv2d layers, which is the input to the Batch Norm layers, doesn't have the `dim` attribute.
Here is my code:
Could you please advise how I can work around this problem? Thank you!