Closed · hopef closed this issue 8 months ago
Thank you for your excellent work. Why am I unable to reproduce your perplexity metric on Penn Treebank (PTB)?

In the OWQ paper: 12.46. Our reproduction: 56.033756256103516.

Here is the output after I run the `python main.py huggyllama/llama-7b c4 --wbits 3 --target_bit 3.01` command. Thank you, and I look forward to your reply.

Hi, thank you for your interest in our work! We have updated the code to address the issue of high PPL on the PTB dataset with the LLaMA model. Could you pull the main branch and try again? Thank you :)

Ohh, I got the correct PPL with your latest commit. Thanks a lot.
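For future readers, here is a minimal sketch of the GPTQ-style PTB perplexity evaluation that quantization repos commonly use, for comparing against the reported numbers. The dataset name `ptb_text_only`, the `seqlen` of 2048, and the non-overlapping sliding-window loop are assumptions about the usual recipe, not a copy of this repo's `main.py`:

```python
# Hypothetical sketch of a standard PTB perplexity evaluation
# (GPTQ-style); NOT necessarily the exact logic in this repo.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# PTB test split; "ptb_text_only" is the usual HF dataset name (assumption).
data = load_dataset("ptb_text_only", "penn_treebank", split="test")
enc = tokenizer("\n\n".join(data["sentence"]), return_tensors="pt")

seqlen = 2048  # context length typically used for LLaMA PPL evals
nsamples = enc.input_ids.shape[1] // seqlen
nlls = []
with torch.no_grad():
    for i in range(nsamples):
        batch = enc.input_ids[:, i * seqlen : (i + 1) * seqlen].to(model.device)
        # Causal LM loss is the mean per-token NLL over this window.
        loss = model(batch, labels=batch).loss
        nlls.append(loss.float() * seqlen)

ppl = torch.exp(torch.stack(nlls).sum() / (nsamples * seqlen))
print(f"PTB perplexity: {ppl.item():.2f}")
```

Small differences in this recipe (dataset loading, window stride, tokenizer settings) can shift PPL substantially, which is one common source of gaps like 12.46 vs. 56.03 before the fix on the main branch.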