chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0
31.26k stars 5.45k forks

My computer only has Intel integrated graphics; at runtime it reports that nvcuda.dll cannot be found and the model fails to run #219

Closed ClintYue closed 11 months ago

ClintYue commented 1 year ago

Hello, my machine only has Intel integrated graphics, but the CPU is an i5-11400 @ 2.60GHz with 64 GB of RAM.

At runtime it reports that nvcuda.dll cannot be found, and the model fails to run.

After installing several GB of NVIDIA developer packages I still get the same error. The copies of nvcuda.dll I found online are all too old, and a full search of the C: drive turned up nothing.

How can I run directly on the CPU and bypass the nvcuda requirement?

I have already hard-coded both device settings: EMBEDDING_DEVICE = "cpu" and LLM_DEVICE = "cpu".
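(For reference, these two switches normally live in configs/model_config.py in this project; the sketch below assumes that layout and simply replaces the automatic device detection with fixed values.)

# configs/model_config.py -- assumed location of the device switches
# The original lines typically auto-detect the device, e.g.
#   EMBEDDING_DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Hard-coding both settings forces every component onto the CPU:
EMBEDDING_DEVICE = "cpu"
LLM_DEVICE = "cpu"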

imClumsyPanda commented 1 year ago

Where in the project's code does the reported error occur?


ClintYue commented 1 year ago

D:\anaconda4\python.exe D:\anaconda4\envs\langchain-ChatGLM\webui.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Could not find module 'nvcuda.dll' (or one of its dependencies). Try using the full path with constructor syntax.
模型未成功加载,请到页面左上角"模型配置"选项卡中重新选择后点击"加载模型"按钮 [translation: the model failed to load; reselect it under the "模型配置" (model configuration) tab at the top left of the page and click the "加载模型" (load model) button]
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().

I don't know the exact code location. I considered commenting out all the CUDA-related code, but then realized that even with nvcuda missing the webui itself still starts, so even if I commented that code out, whatever fails to run now would still fail to run.
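(A quick way to check whether the installed PyTorch build is the one pulling in nvcuda.dll, and to confirm the model can be loaded purely on the CPU: a minimal diagnostic sketch, assuming torch and transformers are installed; the model id "THUDM/chatglm-6b" is only an example.)

import torch
from transformers import AutoModel, AutoTokenizer

# A CUDA build of PyTorch installed on a machine without NVIDIA drivers is the
# usual source of the nvcuda.dll warning; a CPU-only build reports cuda as None.
print(torch.__version__)          # e.g. 2.0.1+cpu vs 2.0.1+cu117
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # expected to be False on this machine

# ChatGLM can be loaded purely on the CPU by calling .float() instead of .half().cuda():
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float().eval()

If torch.version.cuda is not None, reinstalling the CPU-only wheel (pip install torch --index-url https://download.pytorch.org/whl/cpu) is usually enough to make the warning disappear.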

imClumsyPanda commented 1 year ago

I suggest testing and troubleshooting with cli_demo first, and only switching to the webui once that works; Gradio's error messages are currently rather unhelpful.


CYTand commented 1 year ago

This seems to be caused by Hugging Face's model download mechanism. Could you try downloading the model manually and changing the config to a local path?
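(For reference, downloading the weights manually and loading them from a local directory looks roughly like this; a sketch, where the path D:/models/chatglm-6b is only a placeholder for wherever you cloned the THUDM/chatglm-6b repository.)

from transformers import AutoModel, AutoTokenizer

# Load from a local directory instead of letting Hugging Face download on the fly.
local_path = "D:/models/chatglm-6b"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(local_path, trust_remote_code=True)
model = AutoModel.from_pretrained(local_path, trust_remote_code=True).float()  # .float() keeps it on the CPU
model = model.eval()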