IST-DASLab / gptq

Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
https://arxiv.org/abs/2210.17323
Apache License 2.0

Why is the wikitext-2 ppl calculated in the code lower than the ppl by lm-evaluation-harness? #40

Open · Chocolife-96 opened this issue 1 year ago

Chocolife-96 commented 1 year ago

About 50% lower. What causes the difference? Is the ppl calculation method different?
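
For reference, here is a minimal sketch (not this repo's exact code) of the token-level perplexity evaluation commonly used in GPTQ-style eval loops: the wikitext-2 test split is concatenated, tokenized, split into fixed-length windows, and the mean NLL per token is exponentiated. The model name and `seqlen` below are placeholder assumptions. One plausible source of the gap is normalization: lm-evaluation-harness's wikitext task reports word-level perplexity (NLL divided by word count), whereas a loop like this normalizes by token count, so the two numbers are not directly comparable.

```python
# Sketch of a token-level wikitext-2 perplexity eval, assuming a causal LM
# with seqlen = 2048. Model ID is a placeholder, not from the GPTQ repo.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-125m"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

# Concatenate the whole test split into one token stream.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
enc = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

seqlen = 2048
nsamples = enc.numel() // seqlen
nlls = []
with torch.no_grad():
    for i in range(nsamples):
        batch = enc[:, i * seqlen : (i + 1) * seqlen]
        # HF shifts labels internally; loss is the mean NLL per predicted token.
        loss = model(batch, labels=batch).loss
        nlls.append(loss.float() * seqlen)

# Perplexity normalized by *token* count; lm-evaluation-harness normalizes
# wikitext by *word* count, which alone can shift the reported value a lot.
ppl = torch.exp(torch.stack(nlls).sum() / (nsamples * seqlen))
print(f"token-level wikitext-2 ppl: {ppl.item():.2f}")
```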