Closed: yobuwen closed this issue 1 year ago
If you train a quantized model with LSQ or LSQ+, then at inference time you don't need lsqprepareV1/lsqprepareV2 or lsqplusprepareV1/lsqplusprepareV2, because the model's parameters were already quantized during training.
So evaluate.py has `Floatmodel = True` # QAT or float-32 training
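The point above can be sketched in a few lines. This is a minimal, hypothetical illustration of LSQ-style "fake quantization" (the function name and signature are illustrative, not the repository's actual API): during QAT the stored weights are already quantized-then-dequantized floats, so re-applying the quantizer at inference time is a no-op and a plain float forward pass suffices.

```python
def fake_quantize(w, s, n_bits=8, signed=True):
    """Quantize w with step size s, then dequantize back to float
    (the "fake quantization" used during QAT)."""
    qn = -(2 ** (n_bits - 1)) if signed else 0
    qp = 2 ** (n_bits - 1) - 1 if signed else 2 ** n_bits - 1
    q = round(w / s)
    q = max(qn, min(qp, q))  # clamp onto the integer grid [qn, qp]
    return q * s             # dequantized value stored in the checkpoint

# Value saved after QAT already lies on the quantization grid:
w_trained = fake_quantize(0.1234, s=0.01)
# Quantizing it again changes nothing, which is why the prepare step
# is unnecessary when evaluating a QAT checkpoint:
assert fake_quantize(w_trained, s=0.01) == w_trained
```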
One more question: I found that s can become negative, which is clearly unreasonable. Is there any way to avoid this?
You need to train the LSQ+ network until it reaches high accuracy; the weights and the scale will then converge to values > 0. This comes from training, not from setting it manually.
So just keep training the network and you will find the final result improves.
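If relying on convergence alone is not enough, a common workaround is to force the step size to stay positive by construction. The sketch below is an assumption, not the repository's code: it shows two standard options, reparametrizing s through a strictly positive function (softplus) or clamping s to a small epsilon after each optimizer step.

```python
import math

def softplus(x):
    """Strictly positive reparametrization: train raw x, use s = softplus(x)."""
    return math.log1p(math.exp(x))

def clamp_positive(s, eps=1e-6):
    """Alternative: clamp s to a small epsilon after every optimizer step."""
    return max(s, eps)

# softplus keeps s positive even if the underlying parameter goes negative:
assert softplus(-5.0) > 0.0
# clamping repairs a step size that an update drove below zero:
assert clamp_positive(-0.03) == 1e-6
```

Either choice guarantees the quantization grid spacing stays valid regardless of what the gradient updates do.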
Actually, I trained VGG with LSQ+ V1 and the accuracy reached 90.481, but s still has many negative values. This seems to happen quite easily.
There is no lsqprepare function in evaluate.py. If I change it to lsqprepareV1 and then load the trained model for evaluation, the accuracy turns out to be very low. When doing inference only, is there any difference between the loaded model and the quantization operations compared with training? ![image](https://user-images.githubusercontent.com/35327580/192469100-3376ad1a-89c0-46fd-8ed8-5dc6e36a9ba9.png)