-
Executing mnist-cnn.py throws this error. It also could not find the dorefa module.
-
Can Larq support training low-precision networks at more than 1 bit? What would it take to extend Larq for that?
Thanks
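As I understand it, Larq layers accept custom quantizer callables, so multi-bit quantization mostly comes down to supplying a k-bit quantizer. A minimal NumPy stand-in for the forward pass of such a quantizer (the straight-through gradient is omitted, and the function name is mine, not Larq's):

```python
import numpy as np

def uniform_quantize(x, k):
    # Quantize x in [0, 1] onto 2**k - 1 evenly spaced levels,
    # i.e. the generic k-bit uniform quantizer used in DoReFa-style schemes.
    n = 2**k - 1
    return np.round(x * n) / n

x = np.linspace(0.0, 1.0, 5)
print(uniform_quantize(x, 2))  # levels are multiples of 1/3
```

With k = 1 this degenerates to a two-level quantizer; larger k gives the multi-bit case the question asks about.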
-
Hi,
I'm learning to use Distiller. When I try to use quantization, I run into a problem.
In examples/classifier_compression/compress_classifier.py,
I change the model to my small net and…
-
Is there already a plan to add binary ops like bitcount for [XNOR-NET](http://arxiv.org/abs/1603.05279)?
bhack updated 4 years ago
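For reference, the core trick XNOR-Net relies on can be sketched in plain Python: with {-1, +1} vectors packed into integer bit patterns, the dot product reduces to an XNOR followed by a popcount (all names here are illustrative):

```python
def pack(v):
    # Pack a {-1, +1} vector into an int: bit i set means v[i] == +1.
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    # Dot product of two packed {-1, +1} vectors of length n:
    # popcount of the XNOR counts matching positions, and
    # matches - mismatches = 2 * matches - n.
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask
    matches = bin(xnor).count("1")  # the popcount / bitcount op
    return 2 * matches - n

a = [1, -1, 1, -1]
b = [1, 1, -1, -1]
print(binary_dot(pack(a), pack(b), 4))  # equals sum(x * y for x, y in zip(a, b))
```

The issue is precisely that frameworks without a native bitcount/popcount op cannot express the `matches` step efficiently.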
-
### 1. What you did:
(1) **If you're using examples, what's the command you run:**
python alexnet-dorefa.py --gpu 0,1,2,3 --data data/ILSVRC2012 --dorefa 1,1,32
### 2. What you observed:
(1)…
-
@ppwwyyxx
### 1. What you did:
(1) **If you're using examples, what's the command you run:**
python ./alexnet-dorefa.py --gpu 0,1,2,3,4,5,6,7,8 --dorefa 1,2,6 --data ImageNet_2012
(2) **If you…
-
Hello again, thanks for the updated paper - equation 1 now makes sense to me. Could you please help me understand your intent with equation 3?
You say the following:
![dorefa v2 paper equation 3](ht…
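For readers without the figure, here is a NumPy sketch of DoReFa's k-bit weight quantization scheme, which I assume is the equation under discussion (the exact equation numbering in the v2 paper may differ):

```python
import numpy as np

def uniform_quantize(x, k):
    # k-bit uniform quantizer on [0, 1].
    n = 2**k - 1
    return np.round(x * n) / n

def quantize_weights_dorefa(w, k):
    # DoReFa-style k-bit weights: normalize via tanh into [0, 1],
    # quantize uniformly, then map back to [-1, 1].
    t = np.tanh(w)
    x = t / (2 * np.abs(t).max()) + 0.5  # in [0, 1]
    return 2 * uniform_quantize(x, k) - 1
```

The tanh normalization bounds the weights before quantization, so every quantized weight lands on one of 2**k - 1 levels in [-1, 1].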
-
In utils/quant_dorefa.py, Line 41
`weight_q = self.uniform_q(x / E) * E`
The sign value is multiplied by E outside the uniform_quantize function, so when the gradient is backpropagated, it will be m…
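A NumPy sketch of what that line computes, assuming uniform_q reduces to sign for 1-bit weights and E is the (detached) mean absolute weight, as in DoReFa's binary-weight scheme: under the straight-through estimator, the inner 1/E and the outer E cancel, so the backpropagated gradient is not rescaled by E:

```python
import numpy as np

def quantize_weights(w):
    # weight_q = uniform_q(w / E) * E with uniform_q == sign for 1 bit
    # (an assumption based on the snippet); E = mean absolute weight.
    E = np.abs(w).mean()
    return np.sign(w / E) * E, E

def ste_grad(w):
    # Straight-through estimator: d uniform_q(u)/du is taken as 1,
    # and E is treated as a constant (detached) w.r.t. w.
    E = np.abs(w).mean()
    return (1.0 / E) * E * np.ones_like(w)  # the E factors cancel to 1

w = np.array([0.3, -0.7, 0.1, -0.5])
w_q, E = quantize_weights(w)
print(w_q)          # each entry is +/- E
print(ste_grad(w))  # all ones: the outer E does not rescale the gradient
```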
-
Is it possible to use Larq for 8-bit precision weights and activations?
I like the interface, which follows that of tf.keras, but it seems to be useful only for binary quantization.
Am I missing…
-
Hi, how should I use quantization-aware training methods such as DoReFa?
Is this terminal command right?
**python3 compress_classifier.py -a alexnet /ImageNet_Share/ --compress=/home/tzc/distiller/exampl…