chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications built with Langchain on top of LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0
31.92k stars · 5.56k forks

[BUG] Why won't the service start normally when the host/server has no network access? #220

Closed · Ch4mpa9ne closed this 1 year ago

Ch4mpa9ne commented 1 year ago

Problem Description: After downloading all required models and confirming that chat and the other services worked, I disconnected the server from the network and restarted the service. The model then failed to load, and manually clicking "reload model" produced the same error.

Steps to Reproduce

  1. Run python webui.py
  2. Click "Model Configuration"
  3. Scroll to "Reload Model"
  4. The error appears: "The model failed to reload; please reselect it in the 'Model Configuration' tab at the top left of the page and click 'Load Model'."

Expected Result: the service works normally.

Actual Result: "The model failed to reload; please reselect it in the 'Model Configuration' tab at the top left of the page and click 'Load Model'."

Environment Information

Additional Information

Below is the output when starting with network access. After webui starts, the service is reachable, the model loads, and chat, knowledge-base loading, and knowledge-base QA all work normally.

(chatGLM) a@b:~/chatGLM/langchain-ChatGLM$ python webui.py 
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:08<00:00,  1.05s/it]
No sentence-transformers model found with name /home/a/.cache/torch/sentence_transformers/GanymedeNil_text2vec-large-chinese. Creating a new one with MEAN pooling.
模型已成功加载,可以开始对话,或从右侧选择模式后开始对话
Running on local URL:  http://0.0.0.0:6006

To create a public link, set `share=True` in `launch()`.

Below is the output without network access. The webui starts and is reachable in the browser, but clicking "Load Model" always reports that the model failed to load.

(chatGLM) a@b:~/chatGLM/langchain-ChatGLM$ python webui.py 
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:08<00:00,  1.05s/it]
HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/GanymedeNil/text2vec-large-chinese (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f91c62d9be0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
模型未成功加载,请到页面左上角"模型配置"选项卡中重新选择后点击"加载模型"按钮
Running on local URL:  http://0.0.0.0:6006

To create a public link, set `share=True` in `launch()`.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:08<00:00,  1.01s/it]
HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/GanymedeNil/text2vec-large-chinese (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f91c63dd580>: Failed to establish a new connection: [Errno 111] Connection refused'))
模型未成功重新加载,请到页面左上角"模型配置"选项卡中重新选择后点击"加载模型"按钮

Since the final deployment environment has no internet access, how can I start the service properly while offline? Thanks!

Afterwards I ran python cli_demo.py while disconnected and got the error below. It looks like an HTTP proxy issue, but haven't all the models already been pulled locally? Why is the network being used at all?

(chatGLM) a@b:~/chatGLM/langchain-ChatGLM$ python cli_demo.py 
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:12<00:00,  1.56s/it]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connection.py:174 in    │
│ _new_conn                                                                                        │
│                                                                                                  │
│   171 │   │   │   extra_kw["socket_options"] = self.socket_options                               │
│   172 │   │                                                                                      │
│   173 │   │   try:                                                                               │
│ ❱ 174 │   │   │   conn = connection.create_connection(                                           │
│   175 │   │   │   │   (self._dns_host, self.port), self.timeout, **extra_kw                      │
│   176 │   │   │   )                                                                              │
│   177                                                                                            │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/util/connection.py:95   │
│ in create_connection                                                                             │
│                                                                                                  │
│    92 │   │   │   │   sock = None                                                                │
│    93 │                                                                                          │
│    94 │   if err is not None:                                                                    │
│ ❱  95 │   │   raise err                                                                          │
│    96 │                                                                                          │
│    97 │   raise socket.error("getaddrinfo returns an empty list")                                │
│    98                                                                                            │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/util/connection.py:85   │
│ in create_connection                                                                             │
│                                                                                                  │
│    82 │   │   │   │   sock.settimeout(timeout)                                                   │
│    83 │   │   │   if source_address:                                                             │
│    84 │   │   │   │   sock.bind(source_address)                                                  │
│ ❱  85 │   │   │   sock.connect(sa)                                                               │
│    86 │   │   │   return sock                                                                    │
│    87 │   │                                                                                      │
│    88 │   │   except socket.error as e:                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connectionpool.py:703   │
│ in urlopen                                                                                       │
│                                                                                                  │
│    700 │   │   │   │   self._prepare_proxy(conn)                                                 │
│    701 │   │   │                                                                                 │
│    702 │   │   │   # Make the request on the httplib connection object.                          │
│ ❱  703 │   │   │   httplib_response = self._make_request(                                        │
│    704 │   │   │   │   conn,                                                                     │
│    705 │   │   │   │   method,                                                                   │
│    706 │   │   │   │   url,                                                                      │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connectionpool.py:386   │
│ in _make_request                                                                                 │
│                                                                                                  │
│    383 │   │                                                                                     │
│    384 │   │   # Trigger any extra validation we need to do.                                     │
│    385 │   │   try:                                                                              │
│ ❱  386 │   │   │   self._validate_conn(conn)                                                     │
│    387 │   │   except (SocketTimeout, BaseSSLError) as e:                                        │
│    388 │   │   │   # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.         │
│    389 │   │   │   self._raise_timeout(err=e, url=url, timeout_value=conn.timeout)               │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connectionpool.py:1042  │
│ in _validate_conn                                                                                │
│                                                                                                  │
│   1039 │   │                                                                                     │
│   1040 │   │   # Force connect early to allow us to validate the connection.                     │
│   1041 │   │   if not getattr(conn, "sock", None):  # AppEngine might not have  `.sock`          │
│ ❱ 1042 │   │   │   conn.connect()                                                                │
│   1043 │   │                                                                                     │
│   1044 │   │   if not conn.is_verified:                                                          │
│   1045 │   │   │   warnings.warn(                                                                │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connection.py:363 in    │
│ connect                                                                                          │
│                                                                                                  │
│   360 │                                                                                          │
│   361 │   def connect(self):                                                                     │
│   362 │   │   # Add certificate verification                                                     │
│ ❱ 363 │   │   self.sock = conn = self._new_conn()                                                │
│   364 │   │   hostname = self.host                                                               │
│   365 │   │   tls_in_tls = False                                                                 │
│   366                                                                                            │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connection.py:186 in    │
│ _new_conn                                                                                        │
│                                                                                                  │
│   183 │   │   │   )                                                                              │
│   184 │   │                                                                                      │
│   185 │   │   except SocketError as e:                                                           │
│ ❱ 186 │   │   │   raise NewConnectionError(                                                      │
│   187 │   │   │   │   self, "Failed to establish a new connection: %s" % e                       │
│   188 │   │   │   )                                                                              │
│   189                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fef769a24f0>: Failed to establish a new connection: [Errno 101] Network is 
unreachable

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/requests/adapters.py:487 in     │
│ send                                                                                             │
│                                                                                                  │
│   484 │   │   │   timeout = TimeoutSauce(connect=timeout, read=timeout)                          │
│   485 │   │                                                                                      │
│   486 │   │   try:                                                                               │
│ ❱ 487 │   │   │   resp = conn.urlopen(                                                           │
│   488 │   │   │   │   method=request.method,                                                     │
│   489 │   │   │   │   url=url,                                                                   │
│   490 │   │   │   │   body=request.body,                                                         │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/connectionpool.py:787   │
│ in urlopen                                                                                       │
│                                                                                                  │
│    784 │   │   │   elif isinstance(e, (SocketError, HTTPException)):                             │
│    785 │   │   │   │   e = ProtocolError("Connection aborted.", e)                               │
│    786 │   │   │                                                                                 │
│ ❱  787 │   │   │   retries = retries.increment(                                                  │
│    788 │   │   │   │   method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]           │
│    789 │   │   │   )                                                                             │
│    790 │   │   │   retries.sleep()                                                               │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/urllib3/util/retry.py:592 in    │
│ increment                                                                                        │
│                                                                                                  │
│   589 │   │   )                                                                                  │
│   590 │   │                                                                                      │
│   591 │   │   if new_retry.is_exhausted():                                                       │
│ ❱ 592 │   │   │   raise MaxRetryError(_pool, url, error or ResponseError(cause))                 │
│   593 │   │                                                                                      │
│   594 │   │   log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)                  │
│   595                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/GanymedeNil/text2vec-large-chinese 
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fef769a24f0>: Failed to establish a new connection: [Errno 101] 
Network is unreachable'))

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/a/chatGLM/langchain-ChatGLM/cli_demo.py:19 in <module>                               │
│                                                                                                  │
│   16                                                                                             │
│   17 if __name__ == "__main__":                                                                  │
│   18 │   local_doc_qa = LocalDocQA()                                                             │
│ ❱ 19 │   local_doc_qa.init_cfg(llm_model=LLM_MODEL,                                              │
│   20 │   │   │   │   │   │     embedding_model=EMBEDDING_MODEL,                                  │
│   21 │   │   │   │   │   │     embedding_device=EMBEDDING_DEVICE,                                │
│   22 │   │   │   │   │   │     llm_history_len=LLM_HISTORY_LEN,                                  │
│                                                                                                  │
│ /home/a/chatGLM/langchain-ChatGLM/chains/local_doc_qa.py:137 in init_cfg                   │
│                                                                                                  │
│   134 │   │   │   │   │   │   │   use_ptuning_v2=use_ptuning_v2)                                 │
│   135 │   │   self.llm.history_len = llm_history_len                                             │
│   136 │   │                                                                                      │
│ ❱ 137 │   │   self.embeddings = HuggingFaceEmbeddings(model_name=embedding_model_dict[embeddin   │
│   138 │   │   │   │   │   │   │   │   │   │   │   │   model_kwargs={'device': embedding_device   │
│   139 │   │   self.top_k = top_k                                                                 │
│   140                                                                                            │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/langchain/embeddings/huggingfac │
│ e.py:46 in __init__                                                                              │
│                                                                                                  │
│    43 │   │   try:                                                                               │
│    44 │   │   │   import sentence_transformers                                                   │
│    45 │   │   │                                                                                  │
│ ❱  46 │   │   │   self.client = sentence_transformers.SentenceTransformer(                       │
│    47 │   │   │   │   self.model_name, cache_folder=self.cache_folder, **self.model_kwargs       │
│    48 │   │   │   )                                                                              │
│    49 │   │   except ImportError:                                                                │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/sentence_transformers/SentenceT │
│ ransformer.py:87 in __init__                                                                     │
│                                                                                                  │
│    84 │   │   │   │                                                                              │
│    85 │   │   │   │   if not os.path.exists(os.path.join(model_path, 'modules.json')):           │
│    86 │   │   │   │   │   # Download from hub with caching                                       │
│ ❱  87 │   │   │   │   │   snapshot_download(model_name_or_path,                                  │
│    88 │   │   │   │   │   │   │   │   │   │   cache_dir=cache_folder,                            │
│    89 │   │   │   │   │   │   │   │   │   │   library_name='sentence-transformers',              │
│    90 │   │   │   │   │   │   │   │   │   │   library_version=__version__,                       │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/sentence_transformers/util.py:4 │
│ 42 in snapshot_download                                                                          │
│                                                                                                  │
│   439 │   elif use_auth_token:                                                                   │
│   440 │   │   token = HfFolder.get_token()                                                       │
│   441 │                                                                                          │
│ ❱ 442 │   model_info = _api.model_info(repo_id=repo_id, revision=revision, token=token)          │
│   443 │                                                                                          │
│   444 │   storage_folder = os.path.join(                                                         │
│   445 │   │   cache_dir, repo_id.replace("/", "_")                                               │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/huggingface_hub/utils/_validato │
│ rs.py:120 in _inner_fn                                                                           │
│                                                                                                  │
│   117 │   │   if check_use_auth_token:                                                           │
│   118 │   │   │   kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha   │
│   119 │   │                                                                                      │
│ ❱ 120 │   │   return fn(*args, **kwargs)                                                         │
│   121 │                                                                                          │
│   122 │   return _inner_fn  # type: ignore                                                       │
│   123                                                                                            │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/huggingface_hub/hf_api.py:1603  │
│ in model_info                                                                                    │
│                                                                                                  │
│   1600 │   │   │   params["securityStatus"] = True                                               │
│   1601 │   │   if files_metadata:                                                                │
│   1602 │   │   │   params["blobs"] = True                                                        │
│ ❱ 1603 │   │   r = get_session().get(path, headers=headers, timeout=timeout, params=params)      │
│   1604 │   │   hf_raise_for_status(r)                                                            │
│   1605 │   │   d = r.json()                                                                      │
│   1606 │   │   return ModelInfo(**d)                                                             │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/requests/sessions.py:600 in get │
│                                                                                                  │
│   597 │   │   """                                                                                │
│   598 │   │                                                                                      │
│   599 │   │   kwargs.setdefault("allow_redirects", True)                                         │
│ ❱ 600 │   │   return self.request("GET", url, **kwargs)                                          │
│   601 │                                                                                          │
│   602 │   def options(self, url, **kwargs):                                                      │
│   603 │   │   r"""Sends a OPTIONS request. Returns :class:`Response` object.                     │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/requests/sessions.py:587 in     │
│ request                                                                                          │
│                                                                                                  │
│   584 │   │   │   "allow_redirects": allow_redirects,                                            │
│   585 │   │   }                                                                                  │
│   586 │   │   send_kwargs.update(settings)                                                       │
│ ❱ 587 │   │   resp = self.send(prep, **send_kwargs)                                              │
│   588 │   │                                                                                      │
│   589 │   │   return resp                                                                        │
│   590                                                                                            │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/requests/sessions.py:701 in     │
│ send                                                                                             │
│                                                                                                  │
│   698 │   │   start = preferred_clock()                                                          │
│   699 │   │                                                                                      │
│   700 │   │   # Send the request                                                                 │
│ ❱ 701 │   │   r = adapter.send(request, **kwargs)                                                │
│   702 │   │                                                                                      │
│   703 │   │   # Total elapsed time of the request (approximately)                                │
│   704 │   │   elapsed = preferred_clock() - start                                                │
│                                                                                                  │
│ /home/a/anaconda3/envs/chatGLM/lib/python3.8/site-packages/requests/adapters.py:520 in     │
│ send                                                                                             │
│                                                                                                  │
│   517 │   │   │   │   # This branch is for urllib3 v1.22 and later.                              │
│   518 │   │   │   │   raise SSLError(e, request=request)                                         │
│   519 │   │   │                                                                                  │
│ ❱ 520 │   │   │   raise ConnectionError(e, request=request)                                      │
│   521 │   │                                                                                      │
│   522 │   │   except ClosedPoolError as e:                                                       │
│   523 │   │   │   raise ConnectionError(e, request=request)                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ConnectionError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: 
/api/models/GanymedeNil/text2vec-large-chinese (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fef769a24f0>: Failed
to establish a new connection: [Errno 101] Network is unreachable'))
imClumsyPanda commented 1 year ago

From the error message, it looks like the LLM loads correctly in the offline state, but loading the embedding model fails.

Please check whether the text2vec entry in embedding_model_dict in configs/model_config.py (under the project root) has been changed to a local absolute path.
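One way to confirm that a local path will actually keep things offline: per the traceback above, sentence-transformers only skips its network-bound snapshot_download when a modules.json file exists in the model directory. A minimal sketch of that check (the helper name is ours, not from the project):

```python
import os
import tempfile

def is_offline_ready(model_path: str) -> bool:
    # sentence-transformers only skips snapshot_download (a network call)
    # when modules.json is present in the target directory.
    return os.path.exists(os.path.join(model_path, "modules.json"))

# Self-check of the logic against a mock model directory:
with tempfile.TemporaryDirectory() as d:
    print(is_offline_ready(d))                            # False: no marker file
    open(os.path.join(d, "modules.json"), "w").close()
    print(is_offline_ready(d))                            # True: marker present
```

Run this against the absolute path you put into embedding_model_dict; a False result means the next load attempt will try to reach huggingface.co.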

Ch4mpa9ne commented 1 year ago

From the error message, it looks like the LLM loads correctly in the offline state, but loading the embedding model fails.

Please check whether the text2vec entry in embedding_model_dict in configs/model_config.py (under the project root) has been changed to a local absolute path.

Why does this happen? When I first used the project online, I apparently already downloaded the embedding model and used it successfully.

When I checked ~/.cache/huggingface, I indeed found no embedding-model files, only the LLM model (though I may have downloaded that earlier).

So does webui.py download model files, or does it use an online API? Where are downloaded model files placed by default?

Below is the relevant part of my configs/model_config.py; everything else is left at its defaults. Since llm_model_dict can resolve the model whether or not the machine is online, it should be reading model files from ~/.cache/huggingface? Yet while online it reported downloading the embedding model, and ~/.cache/huggingface does not contain it, so is the online mode using an API?

embedding_model_dict = {
#    "ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
    "ernie-base": "nghuyong/ernie-3.0-base-zh",
    "text2vec-base": "shibing624/text2vec-base-chinese",
    "text2vec": "GanymedeNil/text2vec-large-chinese",
}

llm_model_dict = {
    "chatyuan": "ClueAI/ChatYuan-large-v2",
#    "chatglm-6b-int4-qe": "THUDM/chatglm-6b-int4-qe",
#    "chatglm-6b-int4": "THUDM/chatglm-6b-int4",
#    "chatglm-6b-int8": "THUDM/chatglm-6b-int8",
    "chatglm-6b": "THUDM/chatglm-6b",
}
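On the question of where downloaded files end up: models referenced by repo id, as in the dicts above, are cached by huggingface_hub under a fixed directory layout rather than fetched fresh each time; the catch is that resolving which revision to use still involves one call to the Hub API, which is exactly what fails offline. A sketch of the layout (the commit hash is the one that appears later in this thread; the helper is illustrative, not project code):

```python
import os

def snapshot_dir(cache_root: str, repo_id: str, commit: str) -> str:
    # huggingface_hub cache layout:
    #   <root>/hub/models--<org>--<name>/snapshots/<commit>/
    return os.path.join(cache_root, "hub",
                        "models--" + repo_id.replace("/", "--"),
                        "snapshots", commit)

print(snapshot_dir(os.path.expanduser("~/.cache/huggingface"),
                   "GanymedeNil/text2vec-large-chinese",
                   "b23825b5841818578dd225b5420c4b026ff58aa3"))
```

Note that the online log above also shows a second, older cache at ~/.cache/torch/sentence_transformers with a different naming scheme (GanymedeNil_text2vec-large-chinese), which is likely why a warm hub cache alone did not help here.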

After testing, it seems the API is contacted first even when the embedding model already exists in ~/.cache/huggingface.

HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/GanymedeNil/text2vec-large-chinese (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fb28c5d8ac0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
模型未成功加载,请到页面左上角"模型配置"选项卡中重新选择后点击"加载模型"按钮

Changing configs/model_config.py to an absolute path then allows the model to load:

embedding_model_dict = {
#    "ernie-tiny": "nghuyong/ernie-3.0-nano-zh",
    "ernie-base": "nghuyong/ernie-3.0-base-zh",
    "text2vec-base": "shibing624/text2vec-base-chinese",
#    "text2vec": "GanymedeNil/text2vec-large-chinese",
    "text2vec": "/home/a/.cache/huggingface/hub/models--GanymedeNil--text2vec-large-chinese/snapshots/b23825b5841818578dd225b5420c4b026ff58aa3",
}
imClumsyPanda commented 1 year ago

This project itself does not call any API; the "api" in the error message above is presumably the Hub API used while downloading the embedding model.

If the models are already downloaded locally, I suggest changing the selected entries in both model dicts in configs/model_config.py to local absolute paths; otherwise the code first checks for network connectivity before deciding whether to download the model.

For how to modify the config after downloading, see the FAQ section in the README.
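For newer stacks, a possible alternative to hard-coding paths is the offline-mode environment variables honored by recent huggingface_hub and transformers releases. This is hedged advice: the sentence-transformers version in the traceback above vendors its own snapshot_download, which queries the Hub API whenever modules.json is missing locally, so for this exact stack the absolute-path fix remains the reliable route.

```python
import os

# Must be set before transformers / huggingface_hub are imported,
# otherwise the flags are read too late to take effect.
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: never hit the network
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: resolve from cache only

print(os.environ["HF_HUB_OFFLINE"], os.environ["TRANSFORMERS_OFFLINE"])
```

These can equally be exported in the shell before running python webui.py.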

imClumsyPanda commented 1 year ago

Besides that, when running in an offline environment I recommend pinning gradio to version 3.21.

Ch4mpa9ne commented 1 year ago

Thanks, author.

If you also need to deploy to an offline environment like I do, my advice is: create a user whose home directory is mounted on a separate disk, and manage the Python environment with conda, so that the conda environments and the Hugging Face caches all live under that home directory. Test everything while online, then unplug the network cable and test again. Finally, just move the disk to the offline server, recreate the user, and mount the disk as its home directory.

Trust me, you do not want to go down the proxy route; it is full of pitfalls.
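The approach above amounts to making sure a few cache locations travel with the disk. A small inventory sketch, with the paths taken from the logs and tracebacks in this thread (adjust for your own machine; the list is ours, not from the project):

```python
import os

# Cache locations that must travel with the home directory for an
# offline deployment (paths as they appear in this thread's logs):
CACHE_DIRS = [
    "~/.cache/huggingface",                  # transformers / hub model cache
    "~/.cache/torch/sentence_transformers",  # embedding cache (see online log)
    "~/anaconda3/envs",                      # conda environments
]

for d in CACHE_DIRS:
    p = os.path.expanduser(d)
    print(p, "->", "present" if os.path.isdir(p) else "MISSING")
```

Running this on the offline server after mounting the disk gives a quick sanity check before starting the service.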