The console output is as follows:
F:\edgeDownload\ChatGLM-webui-main\ChatGLM-webui-main>python webui.py
CUDA is not available, using cpu mode...
Traceback (most recent call last):
  File "F:\edgeDownload\ChatGLM-webui-main\ChatGLM-webui-main\webui.py", line 57, in <module>
    init()
  File "F:\edgeDownload\ChatGLM-webui-main\ChatGLM-webui-main\webui.py", line 28, in init
    load_model()
  File "F:\edgeDownload\ChatGLM-webui-main\ChatGLM-webui-main\modules\model.py", line 66, in load_model
    tokenizer = AutoTokenizer.from_pretrained(cmd_opts.model_path, trust_remote_code=True)
  File "C:\Users\14651\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 689, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "C:\Users\14651\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 1841, in from_pretrained
    return cls._from_pretrained(
  File "C:\Users\14651\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\tokenization_utils_base.py", line 2004, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "C:\Users\14651/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\619e736c6d4cd139840579c5482063b75bed5666\tokenization_chatglm.py", line 221, in __init__
    self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)
  File "C:\Users\14651/.cache\huggingface\modules\transformers_modules\THUDM\chatglm-6b\619e736c6d4cd139840579c5482063b75bed5666\tokenization_chatglm.py", line 58, in __init__
    assert vocab_file is not None
AssertionError
Windows 11, NVIDIA GPU. I set everything up step by step following 秋叶's guide. Any help would be greatly appreciated, many thanks!
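From the traceback, the assertion fires because the ChatGLM tokenizer receives vocab_file=None, which usually means the tokenizer files were not found under the directory passed as cmd_opts.model_path. A minimal self-check sketch, assuming the file names from the THUDM/chatglm-6b repo (ice_text.model is the SentencePiece vocab); REQUIRED and missing_files are illustrative helpers of my own, not part of the webui:

```python
from pathlib import Path

# Files the chatglm-6b tokenizer/model load expects locally
# (names assumed from the THUDM/chatglm-6b repository listing).
REQUIRED = ["ice_text.model", "tokenizer_config.json", "config.json"]

def missing_files(model_dir: str) -> list[str]:
    """Return the expected files that are absent from model_dir."""
    root = Path(model_dir)
    return [name for name in REQUIRED if not (root / name).exists()]

# Example (the path here is hypothetical):
# print(missing_files(r"F:\models\chatglm-6b"))
```

If this prints a non-empty list, re-downloading the missing files into the model directory (or pointing --model-path at a complete copy) would be the first thing to try.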