D:\web\chatGLM\ChatGLM-6B> python.exe .\web_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 25%|██████████████▎ | 2/8 [00:02<00:06, 1.14s/it]
Traceback (most recent call last):
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 419, in load_state_dict
    if f.read(7) == "version":
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 64: illegal multibyte sequence

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\web\chatGLM\ChatGLM-6B\web_demo.py", line 6, in <module>
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 466, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2646, in from_pretrained
    ) = cls._load_pretrained_model(
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 2955, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "C:\Users\zh\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 431, in load_state_dict
    raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\zh/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\4de8efebc837788ffbfc0a15663de8553da362a2\pytorch_model-00003-of-00008.bin' at 'C:\Users\zh/.cache\huggingface\hub\models--THUDM--chatglm-6b\snapshots\4de8efebc837788ffbfc0a15663de8553da362a2\pytorch_model-00003-of-00008.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Expected Behavior
No response
Steps To Reproduce
python.exe .\web_demo.py
Environment
- OS: Windows 10
- Python: 3.10.6
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : True
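The blank Transformers and PyTorch fields above could be filled in with a quick version check; `pkg_version` is just an illustrative stdlib helper, not a project utility:

```python
# Sketch: report installed versions of the packages relevant to this issue.
from importlib import metadata

def pkg_version(name: str) -> str:
    """Return the installed version of a distribution, or 'not installed'."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

for pkg in ("transformers", "torch"):
    print(f"{pkg}: {pkg_version(pkg)}")
```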
Anything else?
No response