THUDM / ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model

[BUG/Help] quantization.py line 316, in extract_weight_to_float: `func(` raises TypeError: 'NoneType' object is not callable #1467

Open xiaoming521 opened 3 months ago

xiaoming521 commented 3 months ago

Is there an existing issue for this?

Current Behavior

File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 80, in forward weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache) File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 316, in extract_weight_to_float func( TypeError: 'NoneType' object is not callable

Expected Behavior

File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 80, in forward weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache) File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 316, in extract_weight_to_float func( TypeError: 'NoneType' object is not callable

Steps To Reproduce

File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 80, in forward weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache) File "C:\Users\19027387/.cache\huggingface\modules\transformers_modules\chatglm-6b-4\quantization.py", line 316, in extract_weight_to_float func( TypeError: 'NoneType' object is not callable

Environment

- OS: Windows 7
- Python: 3.8
- Transformers: 4.27.1
- PyTorch: 1.10
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): no
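
Since CUDA is unavailable, the quantized model falls back to the CPU kernel path, which (to my understanding) is compiled from C source at load time and therefore needs a C compiler on PATH. Below is a hedged diagnostic sketch to check those prerequisites; it assumes a missing or failed kernel build is the root cause, which the traceback suggests but does not prove:

```python
# Diagnostic sketch (assumptions, not a guaranteed fix): check whether the
# pieces the quantized CPU path usually depends on are available.
import importlib.util
import shutil

import torch

# A C compiler on PATH is typically needed to build the CPU quantization kernel.
print("gcc found:", shutil.which("gcc"))

# cpm_kernels provides the GPU kernels; harmless to check even without CUDA.
print("cpm_kernels installed:", importlib.util.find_spec("cpm_kernels") is not None)

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```

If `gcc` is not found, installing a GCC toolchain (for example TDM-GCC or MinGW on Windows) and reloading the model is a commonly reported workaround for this error.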
