zhihaofan opened this issue 1 year ago
Regarding the input: to minimize the error after PyTorch training and quantization, I mapped the input to the range (-1, 1), i.e., the original (0, 255) image was transformed as input / 127 - 1 before being fed to the network.
Could this step cause the problem? My previous attempts with input / 255 did not run into it.
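For clarity, here is a minimal NumPy sketch of the two preprocessing variants I compared (`img` is a placeholder for the uint8 input image):

```python
import numpy as np

# Example uint8 values spanning the input range; any HxWxC image array works.
img = np.array([0, 127, 255], dtype=np.uint8)

# Variant that shows the problem: map [0, 255] to roughly (-1, 1).
# Note the maximum slightly exceeds 1.0: 255/127 - 1 = 128/127 ~= 1.0079.
x_signed = img.astype(np.float32) / 127.0 - 1.0

# Earlier variant that worked: map [0, 255] to [0, 1].
x_unsigned = img.astype(np.float32) / 255.0

print(x_signed)    # approximately [-1.0, 0.0, 1.0079]
print(x_unsigned)  # approximately [0.0, 0.498, 1.0]
```

Whether the fact that input / 127 - 1 slightly overshoots 1.0 at the top of the range interacts badly with the quantizer's representable range is exactly what I am unsure about.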
After quantization-aware training, I found that the test results of the model returned by qat_processor.trainable_model(allow_reused_module=True) were normal, but the model converted for deployment with qat_processor.deployable_model(args.output_dir, used_for_xmodel=True) has very serious error. Below is my code for quant and deploy:
When I run the quant branch, the VAL results are shown in Figure 1; when I run the deploy branch, they are shown in Figure 2. The inputs and models are the same for both branches, but the deploy results are much worse.

[Figure 1]
[Figure 2]
Any help would be great. Thanks!