chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0

[BUG] Running `python startup.py -a` errors out and hangs; the model is local at C:\AI\models\THUDM\chatglm3-6b #3062

Closed kevinchi8781 closed 6 months ago

kevinchi8781 commented 6 months ago

```
(chat007) C:\AI\chat007\Langchain-Chatchat>python startup.py -a

==============================Langchain-Chatchat Configuration==============================
Operating system: Windows-10-10.0.22631-SP0.
Python version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Project version: v0.2.10
langchain version: 0.0.354. fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
LLM model(s) to start: ['chatglm3-6b'] @ cpu
{'device': 'cuda', 'host': '127.0.0.1', 'infer_turbo': False, 'model_path': 'C:\AI\models\THUDM\chatglm3-6b', 'model_path_exists': True, 'port': 20002}
Current embeddings model: bge-large-zh-v1.5 @ cpu
==============================Langchain-Chatchat Configuration==============================
```

```
2024-02-23 22:57:02,837 - startup.py[line:655] - INFO: Starting services:
2024-02-23 22:57:02,837 - startup.py[line:656] - INFO: To view the llm_api logs, go to C:\AI\chat007\Langchain-Chatchat\logs
C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\langchain_core\_api\deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
  warn_deprecated(
2024-02-23 22:57:08 | ERROR | stderr | INFO: Started server process [22596]
2024-02-23 22:57:08 | ERROR | stderr | INFO: Waiting for application startup.
2024-02-23 22:57:08 | ERROR | stderr | INFO: Application startup complete.
2024-02-23 22:57:08 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-02-23 22:57:08 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 36bd456f ...
Could not locate the tokenization_chatglm.py inside THUDM/chatglm3-6b.
2024-02-23 22:57:18 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-02-23 22:57:18 | ERROR | stderr | Traceback (most recent call last):
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connection.py", line 198, in _new_conn
2024-02-23 22:57:18 | ERROR | stderr |     sock = connection.create_connection(
2024-02-23 22:57:18 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
2024-02-23 22:57:18 | ERROR | stderr |     raise err
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
2024-02-23 22:57:18 | ERROR | stderr |     sock.connect(sa)
2024-02-23 22:57:18 | ERROR | stderr | TimeoutError: timed out
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | Traceback (most recent call last):
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connectionpool.py", line 793, in urlopen
2024-02-23 22:57:18 | ERROR | stderr |     response = self._make_request(
2024-02-23 22:57:18 | ERROR | stderr |                ^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connectionpool.py", line 491, in _make_request
2024-02-23 22:57:18 | ERROR | stderr |     raise new_e
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
2024-02-23 22:57:18 | ERROR | stderr |     self._validate_conn(conn)
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connectionpool.py", line 1099, in _validate_conn
2024-02-23 22:57:18 | ERROR | stderr |     conn.connect()
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connection.py", line 616, in connect
2024-02-23 22:57:18 | ERROR | stderr |     self.sock = sock = self._new_conn()
2024-02-23 22:57:18 | ERROR | stderr |                        ^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connection.py", line 207, in _new_conn
2024-02-23 22:57:18 | ERROR | stderr |     raise ConnectTimeoutError(
2024-02-23 22:57:18 | ERROR | stderr | urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x0000024E65354610>, 'Connection to huggingface.co timed out. (connect timeout=10)')
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | Traceback (most recent call last):
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\requests\adapters.py", line 486, in send
2024-02-23 22:57:18 | ERROR | stderr |     resp = conn.urlopen(
2024-02-23 22:57:18 | ERROR | stderr |            ^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\connectionpool.py", line 847, in urlopen
2024-02-23 22:57:18 | ERROR | stderr |     retries = retries.increment(
2024-02-23 22:57:18 | ERROR | stderr |               ^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\urllib3\util\retry.py", line 515, in increment
2024-02-23 22:57:18 | ERROR | stderr |     raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
2024-02-23 22:57:18 | ERROR | stderr |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr | urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm3-6b/resolve/main/tokenization_chatglm.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000024E65354610>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | Traceback (most recent call last):
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\file_download.py", line 1238, in hf_hub_download
2024-02-23 22:57:18 | ERROR | stderr |     metadata = get_hf_file_metadata(
2024-02-23 22:57:18 | ERROR | stderr |                ^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
2024-02-23 22:57:18 | ERROR | stderr |     return fn(*args, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\file_download.py", line 1631, in get_hf_file_metadata
2024-02-23 22:57:18 | ERROR | stderr |     r = _request_wrapper(
2024-02-23 22:57:18 | ERROR | stderr |         ^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\file_download.py", line 385, in _request_wrapper
2024-02-23 22:57:18 | ERROR | stderr |     response = _request_wrapper(
2024-02-23 22:57:18 | ERROR | stderr |                ^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\file_download.py", line 408, in _request_wrapper
2024-02-23 22:57:18 | ERROR | stderr |     response = get_session().request(method=method, url=url, **params)
2024-02-23 22:57:18 | ERROR | stderr |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\requests\sessions.py", line 589, in request
2024-02-23 22:57:18 | ERROR | stderr |     resp = self.send(prep, **send_kwargs)
2024-02-23 22:57:18 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\requests\sessions.py", line 703, in send
2024-02-23 22:57:18 | ERROR | stderr |     r = adapter.send(request, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\utils\_http.py", line 67, in send
2024-02-23 22:57:18 | ERROR | stderr |     return super().send(request, *args, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\requests\adapters.py", line 507, in send
2024-02-23 22:57:18 | ERROR | stderr |     raise ConnectTimeout(e, request=request)
2024-02-23 22:57:18 | ERROR | stderr | requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm3-6b/resolve/main/tokenization_chatglm.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000024E65354610>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 1106fd63-140d-44be-826e-bc225f6cbbfc)')
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | Traceback (most recent call last):
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\transformers\utils\hub.py", line 385, in cached_file
2024-02-23 22:57:18 | ERROR | stderr |     resolved_file = hf_hub_download(
2024-02-23 22:57:18 | ERROR | stderr |                     ^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
2024-02-23 22:57:18 | ERROR | stderr |     return fn(*args, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |            ^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\huggingface_hub\file_download.py", line 1371, in hf_hub_download
2024-02-23 22:57:18 | ERROR | stderr |     raise LocalEntryNotFoundError(
2024-02-23 22:57:18 | ERROR | stderr | huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-02-23 22:57:18 | ERROR | stderr |
2024-02-23 22:57:18 | ERROR | stderr | Traceback (most recent call last):
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\multiprocessing\process.py", line 314, in _bootstrap
2024-02-23 22:57:18 | ERROR | stderr |     self.run()
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\multiprocessing\process.py", line 108, in run
2024-02-23 22:57:18 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\AI\chat007\Langchain-Chatchat\startup.py", line 389, in run_model_worker
2024-02-23 22:57:18 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\AI\chat007\Langchain-Chatchat\startup.py", line 217, in create_model_worker_app
2024-02-23 22:57:18 | ERROR | stderr |     worker = ModelWorker(
2024-02-23 22:57:18 | ERROR | stderr |              ^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\fastchat\serve\model_worker.py", line 77, in __init__
2024-02-23 22:57:18 | ERROR | stderr |     self.model, self.tokenizer = load_model(
2024-02-23 22:57:18 | ERROR | stderr |                                  ^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\fastchat\model\model_adapter.py", line 348, in load_model
2024-02-23 22:57:18 | ERROR | stderr |     model, tokenizer = adapter.load_model(model_path, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\fastchat\model\model_adapter.py", line 816, in load_model
2024-02-23 22:57:18 | ERROR | stderr |     tokenizer = AutoTokenizer.from_pretrained(
2024-02-23 22:57:18 | ERROR | stderr |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 797, in from_pretrained
2024-02-23 22:57:18 | ERROR | stderr |     tokenizer_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs)
2024-02-23 22:57:18 | ERROR | stderr |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\transformers\dynamic_module_utils.py", line 488, in get_class_from_dynamic_module
2024-02-23 22:57:18 | ERROR | stderr |     final_module = get_cached_module_file(
2024-02-23 22:57:18 | ERROR | stderr |                    ^^^^^^^^^^^^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\transformers\dynamic_module_utils.py", line 294, in get_cached_module_file
2024-02-23 22:57:18 | ERROR | stderr |     resolved_module_file = cached_file(
2024-02-23 22:57:18 | ERROR | stderr |                            ^^^^^^^^^^^^
2024-02-23 22:57:18 | ERROR | stderr |   File "C:\Users\1\anaconda3\envs\chat007\Lib\site-packages\transformers\utils\hub.py", line 425, in cached_file
2024-02-23 22:57:18 | ERROR | stderr |     raise EnvironmentError(
2024-02-23 22:57:18 | ERROR | stderr | OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like THUDM/chatglm3-6b is not the path to a directory containing a file named tokenization_chatglm.py.
2024-02-23 22:57:18 | ERROR | stderr | Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```

dosubot[bot] commented 6 months ago

🤖

Based on the context provided, it seems like you're encountering multiple issues. Let's address them one by one.

  1. Missing "tokenization_chatglm.py" file: This file is typically responsible for tokenizing the input text into a format that the model can understand. If it's missing, you might have a corrupted or incomplete model directory. You can try re-downloading the model from the source repository or the official website of the model. Refer to this issue for a similar problem and its solution.

  2. ConnectionRefusedError and MaxRetryError: These errors suggest that your program is unable to establish a connection with the 'huggingface.co' server. This could be due to a network issue, firewall settings, or the server itself might be down. Here are a few things you can try:

    • Check your internet connection and ensure it's stable.
    • If you're behind a firewall, make sure it's not blocking your program from accessing the internet.
    • Try accessing 'huggingface.co' in your web browser to see if the server is up and running.
    • If you're using a proxy, make sure it's configured correctly. You can refer to this issue where a similar problem was solved by setting the proxy address.

If the problem persists, you might want to consider using a different model that doesn't require a connection to 'huggingface.co', or you can download the model and use it locally.
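If the model files are already on disk, the offline route can be sketched as follows. This is a minimal illustration: `TRANSFORMERS_OFFLINE` and `HF_HUB_OFFLINE` are environment variables documented by transformers/huggingface_hub, the directory path is the one reported in this issue, and the tokenizer call is left commented out.

```python
import os

# Tell transformers/huggingface_hub to use only local files and never
# contact huggingface.co. Must be set before the first model/tokenizer load.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

# Passing a local directory instead of a repo id ("THUDM/chatglm3-6b")
# bypasses the Hub lookup entirely, provided all files are present,
# including tokenization_chatglm.py (loaded via trust_remote_code).
local_model_dir = r"C:\AI\models\THUDM\chatglm3-6b"

# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(local_model_dir, trust_remote_code=True)
```

With these variables set, any attempt to reach the Hub fails fast instead of hanging on a connect timeout, which makes missing local files immediately visible.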

As for the exact function of the "tokenization_chatglm.py" file in the "THUDM/chatglm3-6b" model and how it interacts with the rest of the Langchain-Chatchat codebase, I wasn't able to find specific information within the repository. However, typically, such a file would contain the logic for tokenizing input text into a format that the model can understand. It would be used whenever the model needs to process input or output text.

Please note that these are general suggestions based on the provided context. For a more specific solution, more detailed information about the error messages and your system setup would be needed.


#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
kevinchi8781 commented 6 months ago

C:\AI\models\THUDM\chatglm3-6b does contain the file tokenization_chatglm.py, and I configured it in model_config.py: "chatglm3-6b": "C:\AI\models\THUDM\chatglm3-6b",
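One Windows-specific pitfall worth ruling out here (not confirmed as the cause of this issue): in a plain Python string literal, backslashes start escape sequences, so some Windows paths are silently corrupted. The path above happens to survive because `\A`, `\m`, `\T`, and `\c` are not recognized escapes, but a raw string or forward slashes avoid the problem entirely.

```python
# "\n" inside a plain literal becomes a newline, corrupting the path:
plain = "C:\models\new"
# A raw string keeps every backslash literally:
raw = r"C:\models\new"
# Forward slashes also work for Windows paths in Python:
fwd = "C:/models/new"

print("\n" in plain)  # True: the path is silently broken
print("\n" in raw)    # False
```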

dosubot[bot] commented 6 months ago

🤖

Based on the issues you've described, it seems like there are a few different things going wrong. Let's address them one by one:

  1. System not locating the "tokenization_chatglm.py" file: This could be due to the file not being in the expected directory or the file not existing at all. Ensure that the file is in the correct directory and that the directory is included in your system's PATH. Unfortunately, I couldn't find specific information about the expected directory structure for the Langchain-Chatchat application in the repository. If you could provide more details about your current directory structure, it might help in diagnosing the issue.

  2. TimeoutError for 'huggingface.co': This error suggests that the system is unable to connect to the 'huggingface.co' server. This could be due to a network issue or the server being down. You can try again later or check your network connection. If the issue persists, it might be worth checking if there are any known issues with the Hugging Face server.

  3. LocalEntryNotFoundError: This error usually occurs when the system is unable to find a local entry in a database or a file. Ensure that the entry you're looking for exists. If the file does exist and is in the correct location, there might be an issue with how the file is being accessed.

  4. OSError for not being able to connect to 'https://huggingface.co' or find the file in the cached files: This error is similar to the TimeoutError. It suggests that the system is unable to connect to the 'huggingface.co' server or find the file in the cached files. You can try clearing your cache or check your network connection.
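Points 1 and 3 above can be checked mechanically before suspecting the network. The helper below is purely illustrative (the function name and required-file list are assumptions, not part of Langchain-Chatchat):

```python
from pathlib import Path

def check_model_dir(path_str, required=("config.json", "tokenization_chatglm.py")):
    """Return (directory_exists, list_of_missing_required_files)."""
    path = Path(path_str)
    missing = [name for name in required if not (path / name).is_file()]
    return path.is_dir(), missing

# exists, missing = check_model_dir(r"C:\AI\models\THUDM\chatglm3-6b")
# print(exists, missing)
```

If the directory exists and nothing is missing, the loader is probably not being given this path at all (for example, it is falling back to the `THUDM/chatglm3-6b` repo id and hence the Hub).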

If these suggestions don't resolve your issues, please provide more details about the errors and your setup. Specifically, the full error messages and stack traces would be helpful in diagnosing the problem. Also, information about any changes you've made to the code or configuration files could be useful.

In the meantime, you might find it helpful to look at some similar issues that have been resolved in the past. For example, this issue seems to have been caused by a similar problem with the system not being able to locate a file. The solution in that case was to ensure that the file was in the correct location and that the directory was included in the system's PATH. Similarly, this issue was resolved by ensuring that all necessary files were present and correctly configured.

I hope this helps! If you have any more questions or need further clarification, don't hesitate to ask.

zRzRzRzRzRzRzR commented 6 months ago

This cannot appear on Windows - try changing it to 'chatglm3-6b'
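Putting the maintainer's hint together with Windows path handling, a model_config.py entry might look like the sketch below. The `MODEL_PATH`/`"llm_model"` layout is assumed from the 0.2.x config format; verify it against your own model_config.py.

```python
# Hypothetical excerpt in the style of Langchain-Chatchat 0.2.x model_config.py.
MODEL_PATH = {
    "llm_model": {
        # Raw string so the backslashes in the Windows path survive;
        # the key must exactly match the name listed in LLM_MODELS.
        "chatglm3-6b": r"C:\AI\models\THUDM\chatglm3-6b",
    },
}
```

Forward slashes ("C:/AI/models/THUDM/chatglm3-6b") would work equally well and sidestep escaping altogether.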