xvyaward / owq

Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models".
https://arxiv.org/abs/2306.02272

llama 3 8B ppl #5

Open · dorsa-zeinali opened this issue 2 months ago

dorsa-zeinali commented 2 months ago

Hi, I am reproducing your results. Do 44.55% accuracy on MMLU, 51.78% on HellaSwag, and a perplexity of 10.25 on WikiText-2 make sense for Llama 3 8B? Please let me know. Thank you!
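For reference, below is a minimal sketch of the kind of WikiText-2 perplexity check that could be used to sanity-check a number like 10.25. It assumes the `meta-llama/Meta-Llama-3-8B` checkpoint from the Hugging Face Hub, FP16 weights, a 2048-token context, and the common GPTQ-style concatenated evaluation of the raw test split; these details are assumptions and may differ from the repo's own evaluation script and from the quantized OWQ model.

```python
# Minimal sketch: WikiText-2 perplexity for an FP16 baseline (assumptions noted above).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Concatenate the raw WikiText-2 test split, as is common in LLM quantization papers.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
enc = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

seqlen = 2048  # evaluation context length (assumption)
input_ids = enc.input_ids
n_chunks = input_ids.shape[1] // seqlen

nlls = []
with torch.no_grad():
    for i in range(n_chunks):
        chunk = input_ids[:, i * seqlen : (i + 1) * seqlen].to(model.device)
        # The causal-LM loss is the mean negative log-likelihood over the chunk.
        loss = model(chunk, labels=chunk).loss
        nlls.append(loss.float() * seqlen)

ppl = torch.exp(torch.stack(nlls).sum() / (n_chunks * seqlen))
print(f"WikiText-2 perplexity: {ppl.item():.2f}")
```

Comparing the FP16 baseline from a script like this against the quantized model's perplexity would show whether the 10.25 figure reflects the quantization gap or a setup difference (e.g., tokenizer, context length, or dataset split).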