juncongmoo / pyllama

LLaMA: Open and Efficient Foundation Language Models

Evaluation reports an extremely large value when quantizing to 4-bit #105

Open JiachuanDENG opened 1 year ago

JiachuanDENG commented 1 year ago

I followed the steps to produce a 4-bit version of llama-7b with the command `python -m llama.llama_quant decapoda-research/llama-7b-hf c4 --wbits 4 --groupsize 128 --save pyllama-7B4b.pt`. The script runs without errors, but at the evaluation stage it reports a very large perplexity: 251086.96875.

[Screenshot: evaluation output showing perplexity 251086.96875]
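For context, a perplexity that large means the quantized weights are effectively random noise; an intact fp16 LLaMA-7B usually lands in the mid-single digits on WikiText-2. Below is a minimal sketch for checking the fp16 baseline with plain `transformers` (this is not pyllama's own evaluator; the dataset choice, sequence length, and loading details are assumptions):

```python
# Baseline perplexity check with HuggingFace transformers (a sketch, not
# pyllama's evaluation path). Assumes the decapoda checkpoint still loads
# with the installed transformers version.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decapoda-research/llama-7b-hf"  # same checkpoint as the command above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
).eval()

# Tokenize the WikiText-2 test split as one long stream (C4 would work too).
data = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
enc = tokenizer("\n\n".join(data["text"]), return_tensors="pt")

seqlen = 2048  # evaluation window; an illustrative choice
nlls = []
for i in range(0, enc.input_ids.shape[1] - seqlen, seqlen):
    ids = enc.input_ids[:, i : i + seqlen].to(model.device)
    with torch.no_grad():
        # labels == inputs -> the model returns mean token cross-entropy
        loss = model(ids, labels=ids).loss
    nlls.append(loss.float() * seqlen)

ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen))
print(f"fp16 baseline perplexity: {ppl.item():.2f}")  # expect roughly 5-7
```

If the baseline scores in that range while the 4-bit checkpoint scores six figures, the damage happened during quantization or weight packing, not during evaluation.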

And when I test with the quantized `.pt` file, the model returns unreadable results.

[Screenshot: garbled generation output from the quantized model]
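One way to separate a broken checkpoint from a broken decoding setup is to run the same prompt through the unquantized model first. A minimal sketch follows (the prompt and decoding settings are arbitrary placeholders, and again this bypasses pyllama's own inference path):

```python
# Generation sanity check on the fp16 baseline; model id, prompt, and
# decoding settings are illustrative assumptions, not pyllama's defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decapoda-research/llama-7b-hf"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
).eval()

prompt = "The capital of France is"
ids = tok(prompt, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=32, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
# If this fp16 baseline answers sensibly while the 4-bit .pt checkpoint
# emits gibberish on the same prompt, suspect the quantization/packing
# step rather than the prompt or decoding settings.
```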

Has anyone else seen this problem?

rapidAmbakar commented 1 year ago

Yes, same issue, exactly the same.