Closed linnengbo closed 6 months ago
🤖
Hi, @linnengbo! Nice to see you again. I hope you are doing well.
Based on the error message you provided, this issue is likely related to device selection. While initializing the database, the program tried to run on a device that does not exist, which can happen when CUDA is not configured correctly on your system or when the machine has no GPU installed.
First, check whether your system has a usable GPU device. You can do so by running the following Python code:
import torch
print(torch.cuda.device_count())
If the result is 0, your system has no usable GPU device and you need to tell the code to use the CPU. If the result is greater than 0, a GPU is available and you need to make sure the code refers to the correct device index.
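To make that device choice explicit, here is a minimal sketch of picking a device in PyTorch; the fallback order and variable names are illustrative and not taken from the Langchain-Chatchat code:
import torch

# Prefer CUDA, then Apple's MPS backend, then fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using device: {device}")

# Any tensor or model can then be moved onto the selected device.
x = torch.randn(2, 3).to(device)
print(x.device)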
In addition, based on some similar issues I found in the Langchain-Chatchat repository, this problem may also be related to loading the bge-large-zh model. You can try the following steps:
- If the sentence-transformers package is not installed yet, install it with pip install sentence-transformers.
- Manually download the bge-large-zh model from Hugging Face.
- Edit the model_config.py file and update EMBEDDING_MODEL and MODEL_PATH to point at the directory where the model is stored (see the sketch below).
- After editing model_config.py, run python copy_config_example.py to apply the changes.
If these steps do not solve your problem, or you need further help, please provide more information, such as your system environment, Python version, and PyTorch version; that will help us better understand and resolve the issue.
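For illustration, a hedged sketch of what the model_config.py entries mentioned in the steps above might look like; the exact layout of MODEL_PATH differs between Langchain-Chatchat versions (in some it is a nested dict keyed by "embed_model"), and the local path below is a placeholder:
# configs/model_config.py (sketch; adapt to your version of the file)
EMBEDDING_MODEL = "bge-large-zh"

MODEL_PATH = {
    "embed_model": {
        # Point the model name at the directory you downloaded from Hugging Face.
        "bge-large-zh": "/path/to/models/BAAI/bge-large-zh",
    },
}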
Sources
- [BUG] Langchain-chatchat one-click startup (python startup.py -a) reports an error
- Running python init_database.py --recreate-vs reports an error
- [BUG] 3. Initializing the knowledge base and configuration files reports an error: python init_database.py --recreate-vs
- init_database.py
There is no GPU; I am on macOS. After changing the default from GPU to CPU in llm_api_stale.py, init_database still fails with the same error. Is there somewhere else that needs to be changed?
🤖
Based on the error message you provided, it seems that the device setting is not recognized by the system. This could be because your macOS system does not have a GPU and the device is set to "auto", which is not recognized as a valid device.
You mentioned that you have tried changing the default device from GPU to CPU in the llm_api_stale.py file, but the error persists. This could be because there are other places in the code where the device is set to a GPU-specific setting.
In the Langchain-Chatchat repository, the device settings for the models are configured in the configs/server_config.py file. You should change the device from "cuda" to "cpu" for the "chatglm3-6b" and "Qwen1.5-0.5B-Chat" models in the FSCHAT_MODEL_WORKERS dictionary like so:
FSCHAT_MODEL_WORKERS = {
...
"chatglm3-6b": {
"device": "cpu",
},
"Qwen1.5-0.5B-Chat": {
"device": "cpu",
},
...
}
Also, the "device" in the "default" configuration is set to LLM_DEVICE, which is imported from the model_config file. You should ensure that LLM_DEVICE is set to "cpu" in the model_config file.
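As a sketch, assuming model_config.py exposes LLM_DEVICE as a plain module-level setting (as the reply above implies):
# configs/model_config.py (sketch)
LLM_DEVICE = "cpu"  # instead of "auto" or "cuda", so workers that fall back to this setting run on the CPU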
If there are other places in the code where the device is set to "cuda" or another GPU-specific setting, you should change those to "cpu" as well. Without the full codebase, I can't provide a comprehensive list of all the places you might need to make this change.
As for the llm_api_stale.py file, it is a script that is responsible for launching the language model service. It sets up the server, controller, and worker processes, and manages their interactions. The script uses command-line arguments to configure the server, controller, and worker processes, including device settings such as the type of device to use (CPU, CUDA, MPS, or XPU), the number of GPUs to use, and the maximum GPU memory.
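For illustration only (this is not the repository's actual code), command-line device options of that kind are typically declared with argparse roughly like this; the flag names here are modeled on the description above:
import argparse

parser = argparse.ArgumentParser(description="Illustrative device options for a model worker")
parser.add_argument("--device", choices=["cpu", "cuda", "mps", "xpu"], default="cpu",
                    help="Type of device to run the model on")
parser.add_argument("--num-gpus", type=int, default=1,
                    help="Number of GPUs to use when --device is cuda")
parser.add_argument("--max-gpu-memory", type=str, default=None,
                    help="Maximum memory per GPU, e.g. '13GiB'")
args = parser.parse_args()
print(args.device, args.num_gpus, args.max_gpu_memory)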
I hope this helps! If you have any other questions, feel free to ask.
(Langchain-3.11) localhost:Langchain-Chatchat zhangwenhua$ python init_database.py --recreate-vs
recreating all vector stores
2024-02-24 15:22:01,247 - faiss_cache.py[line:92] - INFO: loading vector store in 'samples/vector_store/bge-base-zh-v1.5' from disk.
2024-02-24 15:22:01,398 - SentenceTransformer.py[line:66] - INFO: Load pretrained SentenceTransformer: /Volumes/U393/模型/bge-base-zh-v1.5
/usr/local/Caskroom/miniconda/base/envs/Langchain-3.11/lib/python3.11/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Batches: 100%|██████████| 1/1 [00:00<00:00, 5.48it/s]
2024-02-24 15:22:02,754 - loader.py[line:64] - INFO: Loading faiss.
2024-02-24 15:22:02,877 - loader.py[line:66] - INFO: Successfully loaded faiss.
/usr/local/Caskroom/miniconda/base/envs/Langchain-3.11/lib/python3.11/site-packages/langchain_community/vectorstores/faiss.py:121: UserWarning: Normalizing L2 is not applicable for metric type: METRIC_INNER_PRODUCT
  warnings.warn(
Segmentation fault: 11
(Langchain-3.11) localhost:Langchain-Chatchat zhangwenhua$ /usr/local/Caskroom/miniconda/base/envs/Langchain-3.11/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
Segmentation fault: 11
I am on macOS with an M2 processor and LLM_DEVICE = "mps"; I also tried the cpu option and got the same error.
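To rule out the backend itself, a quick check of whether PyTorch's MPS support is usable on this machine; these are standard PyTorch calls, independent of the project's own configuration:
import torch

# True only if PyTorch was built with MPS support and macOS actually exposes the device.
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())

# A tiny smoke test on the MPS device, falling back to the CPU if it is unavailable.
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(torch.ones(3, device=device) * 2)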
My first suspicion was that faiss was not installed. After installing it with conda install faiss-cpu=1.7.4 -c pytorch and rerunning, it just kept running and had not finished after more than an hour.
The faiss version is correct, but the error is still the same. Are there any additional steps I should take?
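One way to narrow this down is to test the faiss installation on its own, outside the project; a minimal sketch follows, and if even this crashes, the problem is the faiss build itself rather than Langchain-Chatchat:
import faiss
import numpy as np

print(faiss.__version__)

# Build a tiny inner-product index and run one query, mirroring on a small
# scale what the vector-store step does before the segmentation fault.
vectors = np.random.rand(10, 8).astype("float32")
index = faiss.IndexFlatIP(8)
index.add(vectors)
scores, ids = index.search(vectors[:1], 3)
print(scores, ids)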
I am running into this problem as well and would appreciate a solution. Setup: Mac (Intel), faiss-cpu=1.7.4, latest project version.
(base) apple@Mac Langchain-Chatchat % python init_database.py --recreate-vs
2024-02-22 10:19:14,731 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
recreating all vector stores
2024-02-22 10:19:15,366 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-02-22 10:19:15,384 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-02-22 10:19:15,385 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-02-22 10:19:15,385 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-02-22 10:19:15,390 - faiss_cache.py[line:94] - INFO: loading vector store in 'test/vector_store/bge-large-zh' from disk.
2024-02-22 10:19:15,390 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-02-22 10:19:15,576 - SentenceTransformer.py[line:66] - INFO: Load pretrained SentenceTransformer: /Users/apple/IdeaProjects/models/BAAI/bge-large-zh
Batches: 100%|██████████| 1/1 [00:00<00:00, 5.18it/s]
2024-02-22 10:19:16,812 - loader.py[line:64] - INFO: Loading faiss.
2024-02-22 10:19:16,908 - loader.py[line:66] - INFO: Successfully loaded faiss.
/Users/apple/miniconda3/lib/python3.11/site-packages/langchain_community/vectorstores/faiss.py:121: UserWarning: Normalizing L2 is not applicable for metric type: METRIC_INNER_PRODUCT
  warnings.warn(
zsh: segmentation fault  python init_database.py --recreate-vs
(base) apple@Mac Langchain-Chatchat % /Users/apple/miniconda3/lib/python3.11/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '