google / qkeras

QKeras: a quantization deep learning library for TensorFlow Keras
Apache License 2.0

Error during Quantizing MobileNetV2 #57

Open preetam19 opened 3 years ago

preetam19 commented 3 years ago

I am trying to quantize MobileNetV2 with a 4-bit width, but when I run print_qstats(model) I get the error "A merge layer should be called on a list of inputs".

Additionally, is there a way to implement ReLU6 in QKeras? I am also trying to build a 2-bit quantized model with the same architecture, but the accuracy is very low (fluctuating between 15% and 20%). Do you have any tips for a fully 2-bit quantized model? So far I have been using "QConv2D(kernel_quantizer=quantized_bits(2,2), bias_quantizer=quantized_po2(2))" with the corresponding activation functions and batch normalization.
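For context on why 2-bit accuracy can collapse, here is a minimal pure-Python sketch of symmetric 2-bit fixed-point quantization (no QKeras dependency; the step size and clipping range are illustrative assumptions, not the exact quantized_bits(2, 2) semantics). With 2 bits there are only four representable codes, so most weight precision is destroyed:

```python
def quantize_2bit(x, step=1.0):
    """Illustrative symmetric 2-bit quantizer: rounds x to a multiple of
    `step` and clips to the four two's-complement codes {-2, -1, 0, 1}."""
    q = round(x / step)
    q = max(-2, min(1, q))  # only 4 levels survive at 2 bits
    return q * step

weights = [-1.7, -0.4, 0.05, 0.6, 1.3]
print([quantize_2bit(w) for w in weights])  # everything snaps to 4 values
```

With so few levels, nearly all small weights collapse to the same code, which is consistent with accuracy hovering near chance; this is why 2-bit training usually needs extra care (e.g. keeping first/last layers at higher precision).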

preetam19 commented 3 years ago

Sorry, I accidentally closed the issue.