IST-DASLab / gptq

Code for the ICLR 2023 paper "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers".
https://arxiv.org/abs/2210.17323
Apache License 2.0

LAMBADA evaluation accuracy #39

Open kayhanbehdin opened 11 months ago

kayhanbehdin commented 11 months ago

Hello, I've been experimenting with GPTQ and trying to replicate your LAMBADA zero-shot results, but I have been getting significantly lower accuracy (10-15% lower, for OPT specifically) than the paper reports, even for the FP16 baseline. I'm using your pipeline based on the LM Evaluation Harness. Have you seen this before?
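
For reference, this is a minimal sketch of the kind of last-word accuracy check I mean, written directly against `transformers`/`datasets` rather than your `zeroShot` pipeline. The model name, split, and dataset variant are my assumptions, not from your setup; in particular, the raw HF `lambada` set and the OpenAI-detokenized variant are known to score several points apart, which alone could account for part of a gap like this:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "facebook/opt-1.3b"  # assumption: any OPT checkpoint

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16
).cuda().eval()  # FP16 baseline; assumes a CUDA device

# Assumption: raw HF "lambada" test split; the OpenAI-detokenized
# variant ("lambada_openai" in the harness) scores noticeably higher.
data = load_dataset("lambada", split="test")

correct = 0
for ex in data:
    # LAMBADA asks the model to predict the final word of the passage.
    prefix, target = ex["text"].rsplit(" ", 1)
    ids = tok(prefix, return_tensors="pt").input_ids.cuda()
    target_ids = tok(" " + target, add_special_tokens=False,
                     return_tensors="pt").input_ids[0]
    with torch.no_grad():
        for t in target_ids:
            # Teacher-forced greedy check: every target token must be
            # the argmax given the gold tokens before it.
            pred = model(ids).logits[0, -1].argmax()
            if pred.item() != t.item():
                break
            ids = torch.cat([ids, pred.view(1, 1)], dim=1)
        else:
            correct += 1

print(f"LAMBADA last-word accuracy: {correct / len(data):.4f}")
```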