Zhen-Dong / HAWQ

Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware deployment through TVM.
MIT License

Issue about default HAWQ #13

Closed wowow11111 closed 3 years ago

wowow11111 commented 3 years ago

Hi, I've been working on running HAWQ on my machine, and I can now run 'test_resnet_inference_time.py' to completion. Next, I'm trying to take a model from the model zoo and run it on GPU following the instructions in your repo. (Ultimately, I want to run HAWQ on VTA, the TVM-based NPU.)

I re-downloaded the baseline, followed the steps you gave, and have a few questions. First, except for 'resnet18_uniform8', the models downloadable from the model zoo contain only a 'checkpoint.pth.tar' file, not a 'quantized_checkpoint.pth.tar' file, which leads to a [No such file or directory] error. Also, 'hawq_utils_resnet50.py' is hard-coded for ResNet50.

So, what is the difference between checkpoint and quantized_checkpoint? Is it okay to simply change 'quantized_checkpoint' to 'checkpoint' in the 'hawq_utils_resnet50.py' file?

If I do, the earlier error (the dict_key error) occurs instead. For the models that ship only with checkpoint.pth.tar, how do I convert the parameters as described in step "3. change PyTorch parameters to TVM format"?
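One quick way to diagnose a dict_key mismatch like this is to inspect the top-level keys of each .pth.tar file before feeding it to hawq_utils_resnet50.py. A minimal sketch, assuming the files are ordinary torch-serialized dicts; the file name and key names here are illustrative stand-ins, not taken from the HAWQ code:

```python
import torch

# Create a small stand-in checkpoint so this sketch runs end to end;
# in practice you would point torch.load at the downloaded .pth.tar file.
dummy = {"state_dict": {"conv1.weight": torch.zeros(1)}, "epoch": 90}
torch.save(dummy, "checkpoint.pth.tar")

# A training checkpoint typically wraps the weights under "state_dict"
# alongside metadata such as the epoch; a quantized checkpoint may store
# the tensors under different keys, which is what the script trips over.
ckpt = torch.load("checkpoint.pth.tar", map_location="cpu")
print(sorted(ckpt.keys()))  # → ['epoch', 'state_dict']
```

Comparing this listing against the keys that hawq_utils_resnet50.py indexes into shows exactly which entries are missing.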

wowow11111 commented 3 years ago

Apparently the quantized_checkpoint file, which only 'resnet18_uniform8' provides, has the expected dict keys, while the remaining models, which ship only with a checkpoint file, raise errors. And hawq_utils_resnet50.py is still hard-coded for ResNet50. What is needed to run other models, such as resnet50 uniform4 or the mixed-precision models?

wowow11111 commented 3 years ago

OK, I worked around it manually: I loaded the checkpoint.pth.tar file, pulled out the data I needed, built a new dictionary with the keys that 'quantized_checkpoint' has, and saved it as quantized_checkpoint.pth.tar. I think I can close this issue.
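For anyone hitting the same problem, the manual conversion described above can be sketched roughly as follows. This assumes the training checkpoint stores its weights under a "state_dict" key and that the script expects that inner dict directly; the tensor names and the exact set of expected keys are illustrative, not taken from the HAWQ code, so check them against what hawq_utils_resnet50.py actually reads:

```python
import torch

# Stand-in for the downloaded checkpoint.pth.tar: a training checkpoint
# wrapping the model weights under "state_dict" next to training metadata.
checkpoint = {
    "state_dict": {
        "conv1.weight": torch.zeros(3, 3),
        "conv1.weight_integer": torch.zeros(3, 3),  # hypothetical quantized tensor
    },
    "optimizer": {},
    "epoch": 120,
}
torch.save(checkpoint, "checkpoint.pth.tar")

# Rebuild a quantized_checkpoint.pth.tar holding only the entries the
# conversion script expects (here: the inner weight dict itself).
ckpt = torch.load("checkpoint.pth.tar", map_location="cpu")
quantized = dict(ckpt["state_dict"])
torch.save(quantized, "quantized_checkpoint.pth.tar")

reloaded = torch.load("quantized_checkpoint.pth.tar", map_location="cpu")
print(sorted(reloaded.keys()))  # → ['conv1.weight', 'conv1.weight_integer']
```

The same idea extends to keeping or renaming whichever subset of keys the script indexes for a given model.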