THUDM / ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Apache License 2.0

[BUG/Help] <Unable to run after updating; model loading appears to fail> #638

Open yoshikizh opened 1 year ago

yoshikizh commented 1 year ago

Is there an existing issue for this?

Current Behavior

```
D:\web\chatGLM\ChatGLM-6B> python.exe .\web_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards:  25%|██████████████▎ | 2/8 [00:02<00:06, 1.14s/it]
Traceback (most recent call last):
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 419, in load_state_dict
    if f.read(7) == "version":
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 64: illegal multibyte sequence

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\web\chatGLM\ChatGLM-6B\web_demo.py", line 6, in <module>
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 466, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2646, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2955, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 431, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\zh/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\4de8efebc837788ffbfc0a15663de8553da362a2\pytorch_model-00003-of-00008.bin' at 'C:\Users\zh/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\4de8efebc837788ffbfc0a15663de8553da362a2\pytorch_model-00003-of-00008.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```
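The root `RuntimeError` suggests a truncated shard: a modern PyTorch checkpoint is a zip archive, and a partial download lacks the zip central directory, which is exactly what `PyTorchFileReader` complains about. A minimal sketch for spotting bad shards without loading them (the cache path is inferred from the traceback above; adjust it for your machine):

```python
import zipfile
from pathlib import Path

# Hugging Face cache location as seen in the traceback; adjust if you use a custom cache.
snapshots = Path.home() / ".cache/huggingface/hub/models--THUDM--chatglm-6b/snapshots"

for shard in sorted(snapshots.glob("*/pytorch_model-*.bin")):
    # A healthy checkpoint shard is a valid zip archive; a truncated
    # download fails this check, matching the error above.
    status = "ok" if zipfile.is_zipfile(shard) else "CORRUPT - re-download this shard"
    print(f"{shard.name}: {status}")
```

Any shard flagged as corrupt needs to be downloaded again; the others can be kept.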

Expected Behavior

No response

Steps To Reproduce

```
python.exe .\web_demo.py
```

Environment

- OS: Windows 10
- Python:3.10.6
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : True

Anything else?

No response

winie-hy commented 1 year ago

I'm hitting the same error. Is there a fix?

disunlike commented 1 year ago

Same error here, +1

Shukino20001015 commented 1 year ago

+1

duzx16 commented 1 year ago

You can try loading the model from a local path: https://github.com/THUDM/ChatGLM-6B#%E4%BB%8E%E6%9C%AC%E5%9C%B0%E5%8A%A0%E8%BD%BD%E6%A8%A1%E5%9E%8B
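Following the linked README section, loading from a local copy of the model files looks roughly like this (a sketch; `./chatglm-6b` is a hypothetical path to wherever you cloned the model repository, and the `.half().cuda()` call mirrors `web_demo.py`):

```python
from pathlib import Path

# Hypothetical local directory holding config, tokenizer, and shard files.
MODEL_DIR = Path("./chatglm-6b")

if MODEL_DIR.is_dir():
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(str(MODEL_DIR), trust_remote_code=True)
    model = AutoModel.from_pretrained(str(MODEL_DIR), trust_remote_code=True).half().cuda()
else:
    print(f"{MODEL_DIR} not found; clone the model repository there first")
```

Pointing `from_pretrained` at a local directory bypasses the Hugging Face cache entirely, so a corrupt cached shard can no longer interfere.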

WildXBird commented 1 year ago

Ran into this too; re-downloading didn't help either.

ray-008 commented 1 year ago

Make sure the download actually completed. In my case the download itself had failed, which is why loading didn't work; re-downloading fixed it.
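One way to check that the download is complete, as suggested above, is to compare the shard files on disk against `pytorch_model.bin.index.json`, which lists every shard and the expected total size. A sketch (`./chatglm-6b` is a hypothetical local model directory):

```python
import json
from pathlib import Path

model_dir = Path("./chatglm-6b")  # hypothetical local download directory
index_file = model_dir / "pytorch_model.bin.index.json"

if index_file.exists():
    index = json.loads(index_file.read_text())
    expected = index["metadata"]["total_size"]      # total bytes across all shards
    shards = set(index["weight_map"].values())      # e.g. pytorch_model-00003-of-00008.bin
    missing = [s for s in shards if not (model_dir / s).exists()]
    actual = sum((model_dir / s).stat().st_size for s in shards if (model_dir / s).exists())
    print(f"expected {expected} bytes, found {actual} bytes, missing shards: {missing}")
```

If `actual` falls short of `expected`, or any shard is missing, at least one file was only partially downloaded.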

rmrf commented 1 year ago

A model downloaded via `git lfs pull` works fine, but downloading the model manually from Tsinghua Cloud produces exactly this error.