wenda-LLM / wenda

Wenda (闻达): an LLM invocation platform. Its goal is efficient content generation for specific environments, while accounting for the limited compute resources of individuals and small-to-medium businesses, as well as knowledge security and privacy concerns.
GNU Affero General Public License v3.0

Error: name 'model' is not defined #527

Open Adolph3671 opened 4 months ago

Adolph3671 commented 4 months ago

Describe the bug

```
chatglm3_mode True [['cuda', 'fp16']]
Exception in thread Thread-1 (load_model):
Traceback (most recent call last):
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "C:\wenda\wenda\wenda.py", line 53, in load_model
    LLM.load_model()
  File "C:\wenda\wenda\llms\llm_glm6b.py", line 102, in load_model
    tokenizer = AutoTokenizer.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 643, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 487, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
    resolved_file = hf_hub_download(
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\wenda\WPy64-31110\python-3.11.1.amd64\Lib\site-packages\huggingface_hub\utils\_validators.py", line 166, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'model\chatglm3-6b'.
No sentence-transformers model found with name model/m3e-base. Creating a new one with MEAN pooling.
```

To Reproduce
Steps to reproduce the behavior:
1. Start wenda via the launcher
2. Click 'Run GLM6B service' (運行服務GLM6B)
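The final frames of the traceback show the likely root cause: the configured path `model\chatglm3-6b` does not exist as a directory on disk, so transformers falls back to treating the string as a Hugging Face Hub repo id, and the Windows backslash then fails huggingface_hub's repo-id validation. A minimal sketch of a pre-flight check illustrating this (the helper `check_local_model` is hypothetical, not part of wenda):

```python
import os

def check_local_model(path: str) -> str:
    """Return a normalized local model path, or raise a clear error.

    If the directory exists, transformers loads it from disk and never
    validates the string as a Hub repo id; if it does not exist, the raw
    string is handed to huggingface_hub, where '\\' is rejected.
    """
    normalized = path.replace("\\", "/")
    if os.path.isdir(normalized):
        return normalized
    raise FileNotFoundError(
        f"model directory '{normalized}' not found -- download the model "
        "there, or fix the path in the config file"
    )
```

Under that assumption, `check_local_model(r"model\chatglm3-6b")` would raise a `FileNotFoundError` naming the missing folder, instead of the confusing `HFValidationError` above.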

Screenshots: 螢幕擷取畫面 2024-03-28 163318

Operating system: [Win11]

sxyseo commented 1 month ago

You need to either edit the config file, or download chatglm3-6b into the wenda\model folder.
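For the config-file option, the setting to check is the GLM-6B model path in wenda's config. The key names below are illustrative and may differ between wenda versions (consult the example config shipped with your copy); the essential point is that the path must name a directory that actually exists on disk:

```yaml
# Illustrative fragment -- key names may vary by wenda version
llm_type: glm6b
glm6b:
  path: "model/chatglm3-6b"   # forward slashes also work on Windows
```

If the directory exists, transformers loads the tokenizer and weights locally and the string is never validated as a Hub repo id, which is what triggered the `HFValidationError` above.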