iamkanghyunchoi / ait

It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral]

Found an experimental result in the paper that may be very inaccurate #2

Closed fuchun-wang closed 2 years ago

fuchun-wang commented 2 years ago

When I played with the ZeroQ code, I was very surprised to find that the result for ZeroQ/ResNet-18/w4a4 in Table 1 of the paper is very low, 22.58%. But when I ran the official ZeroQ code, the result was 47.96%. The result for the w5a5 setting is 68.26%, compared to 59.6% in the paper.

By the way, your work is solid and impressive!!!
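
For readers unfamiliar with the notation: w4a4 means 4-bit weights and 4-bit activations, and the two bit widths are configured independently, which is why they are easy to mismatch. A minimal sketch of k-bit uniform fake quantization in PyTorch (illustrative only, not ZeroQ's or AIT's actual quantizer):

```python
import torch

def uniform_fake_quantize(x: torch.Tensor, num_bits: int, symmetric: bool = True) -> torch.Tensor:
    # Illustrative sketch; ZeroQ/AIT use their own quantizer implementations.
    if symmetric:
        # Symmetric grid, typical for weights centered around zero.
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max() / qmax
        q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    else:
        # Asymmetric grid, typical for non-negative post-ReLU activations.
        qmax = 2 ** num_bits - 1
        scale = x.max() / qmax
        q = torch.clamp(torch.round(x / scale), 0, qmax)
    return q * scale  # dequantize back to float ("fake" quantization)

# w4a4: BOTH bit widths must be set to 4.
w_q = uniform_fake_quantize(torch.randn(64, 64), num_bits=4, symmetric=True)
a_q = uniform_fake_quantize(torch.relu(torch.randn(64, 64)), num_bits=4, symmetric=False)
```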

iamkanghyunchoi commented 2 years ago

Thank you for your interest in our paper. Right after reading your issue, I freshly cloned the original GitHub repo of ZeroQ and ran it again. The result for ResNet-18/w4a4 is shown below: it records 21.79% top-1 accuracy. Could you elaborate on your test environment?

[Screenshot, 2022-04-14: terminal output of the ZeroQ run showing 21.79% top-1 accuracy for ResNet-18/w4a4]

fuchun-wang commented 2 years ago

@iamkanghyunchoi I'm so sorry, I forgot to change something. Thank you for your quick reply. Sorry again.

iamkanghyunchoi commented 2 years ago

The problem seems to be related to the activation bit setting. Would you check the activation bit again?

EDIT: Oh, you fixed it. Thank you again for paying attention to our paper!
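
To see why the activation bit alone can move the numbers this much, here is a self-contained sketch (synthetic data, not the ZeroQ evaluation pipeline) comparing the uniform-quantization error of 4-bit versus 8-bit activations:

```python
import torch

torch.manual_seed(0)
act = torch.relu(torch.randn(10_000))  # stand-in for post-ReLU activations

for bits in (4, 8):
    qmax = 2 ** bits - 1      # number of positive quantization levels
    scale = act.max() / qmax  # step size of the uniform grid
    deq = torch.clamp(torch.round(act / scale), 0, qmax) * scale
    print(f"a{bits}: mean |quantization error| = {(act - deq).abs().mean():.4f}")
```

The 8-bit grid is substantially finer, so accidentally evaluating with a higher activation bit than intended would inflate the accuracy in exactly the way reported here.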

fuchun-wang commented 2 years ago

Yes! You can just close this 'stupid' issue. Sorry for wasting your precious time.