chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

Problem when using the chatglm2-6b model on the dev branch of langchain-chatglm #708

Closed: includepanda closed this issue 11 months ago

includepanda commented 1 year ago

I pulled the dev branch code. Running the chatglm-6b model works fine, but when I switch to chatglm2-6b, loading the model fails with the following error:

```
ValueError: The device_map provided does not give any device for the following parameters: transformer.embedding.word_embeddings.weight, transformer.rotary_pos_emb.inv_freq, transformer.encoder.layers.0.input_layernorm.weight, transformer.encoder.layers.0.self_attention.query_key_value.weight, transformer.encoder.layers.0.self_attention.query_key_value.bias, transformer.encoder.layers.0.self_attention.dense.weight, transformer.encoder.layers.0.post_attention_layernorm.weight, transformer.encoder.layers.0.mlp.dense_h_to_4h.weight, transformer.encoder.layers.0.mlp.dense_4h_to_h.weight, transformer.encoder.layers.1.input_layernorm.weight, transformer.encoder.layers.1.self_attention.query_key_value.weight,
```

Searching Baidu turned up nothing. chatglm-6b starts normally, but chatglm2-6b won't start. Any help appreciated. (screenshot attached)
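The parameter names in the error hint at the root cause: chatglm2-6b reorganized its module tree (for example, embeddings live under transformer.embedding), so a device_map written for chatglm-6b's parameter names covers none of chatglm2-6b's. A minimal sketch of inspecting those names, assuming the accelerate and transformers packages are installed:

```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModel

# Illustrative: instantiate chatglm2-6b on the meta device (no weights are
# actually allocated) and list the parameter names a device_map must cover.
# These names differ from chatglm-6b's, which is why a map written for the
# older model misses every one of them.
config = AutoConfig.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
with init_empty_weights():
    model = AutoModel.from_config(config, trust_remote_code=True)
for name, _ in list(model.named_parameters())[:5]:
    print(name)  # e.g. transformer.embedding.word_embeddings.weight
```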

Halflifefa commented 1 year ago

I'm hitting the same problem.

fangyinc commented 1 year ago

+1

SkySlity commented 1 year ago

+1

hzg0601 commented 1 year ago

See feature 6 in https://github.com/imClumsyPanda/langchain-ChatGLM/pull/664

string-new commented 1 year ago

How exactly should it be modified? I don't quite follow.

string-new commented 1 year ago

> (quoting the original report above)

Did you manage to solve it?

hzg0601 commented 1 year ago

Pull the llama-cpp branch from my repo, replace the project's loader.py and model_config.py with the versions from my code, and add the following entry to llm_dict in model_config.py:

```python
"chatglm2-6b": {
    "name": "chatglm2-6b",
    "pretrained_model_name": "THUDM/chatglm2-6b",
    "local_model_path": None,
    "provides": "ChatGLM"
},
```

string-new commented 1 year ago

> (quoting @hzg0601's reply above)

Got it, I'll give it a try.

Halflifefa commented 1 year ago

Change line 130 of langchain-ChatGLM/models/loader/loader.py to:

```python
from accelerate import dispatch_model, infer_auto_device_map
```

At line 144 of langchain-ChatGLM/models/loader/loader.py, add the following code (it overrides the device-map settings above; if you need to run chatglm (v1) or moss, comment the new code back out):

```python
self.device_map = infer_auto_device_map(
    model,
    max_memory={0: "6GiB", 1: "6GiB", 2: "6GiB", "cpu": "30GiB"},
    # dtype=torch.int8,
    no_split_module_classes=model._no_split_modules,
)
```

Here max_memory caps how much memory each device may be assigned at load time. My single card has 11 GB; I capped it at 6 GB so later inference doesn't run out of memory. In my tests, even feeding the complete first chapter of Romance of the Three Kingdoms as input did not exhaust memory, with cuda:0 using about 10 GB. The commented-out dtype=torch.int8 is a setting suggested in another issue, but chatglm2-6b fails to start with it enabled, so I commented it out.
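For a sense of what the patched loader ends up doing, here is a self-contained sketch of the same technique (infer_auto_device_map plus dispatch_model from accelerate); the memory limits are illustrative, and _no_split_modules may be None for some remote-code models:

```python
import torch
from accelerate import dispatch_model, infer_auto_device_map
from transformers import AutoModel

# Load the model on CPU first, then let accelerate derive a device_map from
# the model's actual parameter names instead of hard-coding one.
model = AutoModel.from_pretrained(
    "THUDM/chatglm2-6b", trust_remote_code=True, torch_dtype=torch.half
)
device_map = infer_auto_device_map(
    model,
    max_memory={0: "6GiB", "cpu": "30GiB"},  # adjust to your hardware
    no_split_module_classes=model._no_split_modules,  # keep blocks whole
)
# Shard the model across the devices chosen above.
model = dispatch_model(model, device_map=device_map)
```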

string-new commented 1 year ago

> (quoting @hzg0601's reply above)

I tried your code and it does run, but the UI reports that the model failed to load. In the end I gave up on that, commented out the multi-GPU code, and forced single-GPU mode, which works.

hzg0601 commented 1 year ago

See https://github.com/imClumsyPanda/langchain-ChatGLM/commit/d9a0315588b8cb7a538e68b1e02420a0f4a8311f: add a check for chatglm2 in loader.py and change the existing condition that checks for chatglm. (screenshot attached)
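The screenshot is not preserved here, but the pitfall such a change guards against is straightforward: the substring "chatglm" also matches "chatglm2-6b", so the chatglm2 branch has to be tested first. A hedged sketch of that shape (the actual code in the commit may differ):

```python
model_name = "chatglm2-6b"  # illustrative

# Test the more specific name first: "chatglm" is a substring of
# "chatglm2-6b", so checking it first would route chatglm2 down the
# old chatglm-6b code path with the wrong device_map.
if "chatglm2" in model_name.lower():
    print("use the chatglm2 loading path")
elif "chatglm" in model_name.lower():
    print("use the original chatglm path")
```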

yhygta commented 1 year ago

Thanks @hzg0601 for the solution. After changing my code the way yours does, I get the error below. What should I change next? Thanks!

```
ValueError: Unrecognized configuration class <class 'transformers_modules.chatglm2-6b.configuration_chatglm.ChatGLMConfig'> for this kind of AutoModel: AutoModelForCausalLM. Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
```
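This particular error means the model is being loaded through AutoModelForCausalLM, which has no registration for ChatGLM's custom ChatGLMConfig. ChatGLM-family checkpoints are normally loaded via AutoModel with trust_remote_code=True, roughly like the following minimal sketch (the model path is the upstream checkpoint name):

```python
from transformers import AutoModel, AutoTokenizer

# ChatGLM ships its own modeling code alongside the checkpoint; AutoModel
# with trust_remote_code=True dispatches to that code, whereas
# AutoModelForCausalLM only knows the config classes built into transformers.
model_path = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
```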

hzg0601 commented 1 year ago

Just replace the original files with loader.py and model_config.py from the master branch of my repo.

quanzhang2020 commented 1 year ago

The code runs fine on a single GPU; on multiple GPUs, @hzg0601's method fixes the bug.

yhygta commented 1 year ago

Replacing the files fixed it. Thanks, @hzg0601!

staticTao commented 1 year ago

> (quoting @Halflifefa's loader.py patch above)

Hi, after switching to the glm2 model, I get the error below when I select my own knowledge base:

```
Traceback (most recent call last):
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/gradio/routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/gradio/blocks.py", line 1346, in process_api
    result = await self.call_function(
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/gradio/blocks.py", line 1074, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/backup/projects/chatglm/langchain-ChatGLM-0.1.16/webui.py", line 188, in change_vs_name_input
    gr.update(choices=local_doc_qa.list_file_from_vector_store(vs_path), value=[]), \
  File "/backup/projects/chatglm/langchain-ChatGLM-0.1.16/chains/local_doc_qa.py", line 351, in list_file_from_vector_store
    vector_store = load_vector_store(vs_path, self.embeddings)
  File "/backup/projects/chatglm/langchain-ChatGLM-0.1.16/chains/local_doc_qa.py", line 37, in load_vector_store
    return MyFAISS.load_local(vs_path, embeddings)
  File "/root/anaconda3/envs/ChatGLM/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 509, in load_local
    return cls(embeddings.embed_query, index, docstore, index_to_docstore_id)
AttributeError: 'NoneType' object has no attribute 'embed_query'
```

(screenshot attached) Is there a solution for this?
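The last frame of that traceback shows self.embeddings is None by the time MyFAISS.load_local runs, i.e. the embedding model never loaded; the knowledge-base UI only surfaces the symptom. A defensive sketch of the idea (not the project's actual fix; the project calls MyFAISS.load_local, for which langchain's FAISS stands in here so the sketch is runnable):

```python
from langchain.vectorstores import FAISS

# Hypothetical guard for load_vector_store in chains/local_doc_qa.py: fail
# loudly when the embedding model is missing instead of crashing inside
# FAISS.load_local with a confusing AttributeError on embed_query.
def load_vector_store(vs_path, embeddings):
    if embeddings is None:
        raise RuntimeError(
            "Embedding model is not initialized; check the embedding settings "
            "in model_config.py before loading the vector store."
        )
    return FAISS.load_local(vs_path, embeddings)
```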

Yang-125 commented 11 months ago

> (quoting @yhygta's error report above)

I have the same problem when I use the codegeex2 model. How can I solve it?