irlab-sdu / fuzi.mingcha

Fuzi-Mingcha (夫子•明察) is a Chinese legal large language model jointly developed by Shandong University, Inspur Cloud, and China University of Political Science and Law. Built on ChatGLM as the base model, it is trained on large-scale unsupervised Chinese legal corpora together with supervised legal fine-tuning data. The model supports statute retrieval, case analysis, syllogistic judgment reasoning, and legal dialogue, aiming to provide users with comprehensive, high-accuracy legal consultation and answers.
Apache License 2.0

Error when loading the model: ValueError: Unrecognized configuration class <class 'transformers_modules.fuzi-mingcha-v1_0.configuration_chatglm.ChatGLMConfig'> #13

Open zyh3826 opened 6 months ago

zyh3826 commented 6 months ago

Hello, loading the model the way you describe fails, @Furyton @zwh-sdu. The error is:

>>> model = AutoModelForCausalLM.from_pretrained(p, trust_remote_code=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/data/zhaoyuhang/anaconda3/envs/fuzimingcha/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 487, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers_modules.fuzi-mingcha-v1_0.configuration_chatglm.ChatGLMConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
zyh3826 commented 6 months ago

One more thing: api.py under pylucene imports a series of org.apache.lucene packages. How are these installed? My Singularity image won't run after installation, so I have to install the packages myself.

Furyton commented 6 months ago

> Hello, loading the model the way you describe fails, @Furyton @zwh-sdu. The error is:
> ValueError: Unrecognized configuration class <class 'transformers_modules.fuzi-mingcha-v1_0.configuration_chatglm.ChatGLMConfig'> for this kind of AutoModel: AutoModelForCausalLM.

Hello, and thanks for your interest. The model needs to be loaded with AutoModel, i.e.:

from transformers import AutoModel
model = AutoModel.from_pretrained("/path/to/fuzi-mingcha-v1_0", trust_remote_code=True)
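
For background on why AutoModelForCausalLM fails while AutoModel works: with trust_remote_code=True, the Auto* classes are resolved through the auto_map entry in the checkpoint's config.json, and ChatGLM-style checkpoints typically register only AutoModel there. A minimal sketch, using a hypothetical auto_map fragment modeled on that convention (check config.json in fuzi-mingcha-v1_0 for the actual entries):

```python
import json

# Hypothetical auto_map fragment, modeled on the convention used by
# ChatGLM-style checkpoints; the real entries live in config.json
# inside fuzi-mingcha-v1_0.
config_json = """
{
  "auto_map": {
    "AutoConfig": "configuration_chatglm.ChatGLMConfig",
    "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration"
  }
}
"""

auto_map = json.loads(config_json)["auto_map"]

print("AutoModel" in auto_map)               # -> True: resolvable via remote code
print("AutoModelForCausalLM" in auto_map)    # -> False
```

Since AutoModelForCausalLM has no auto_map entry, it falls back to its built-in list of supported config classes, none of which match ChatGLMConfig, hence the ValueError above.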
zyh3826 commented 6 months ago

> Hello, and thanks for your interest. The model needs to be loaded with AutoModel, i.e.:
> from transformers import AutoModel
> model = AutoModel.from_pretrained("/path/to/fuzi-mingcha-v1_0", trust_remote_code=True)

Thanks for the reply; I hadn't noticed, I'm just used to using AutoModelForCausalLM. Also, api.py under pylucene_task* imports a series of org.apache.lucene packages. How are these installed?

Furyton commented 6 months ago

Hello, the lucene installation process we currently use is indeed somewhat cumbersome. We will switch to a new retrieval scheme in the near future to simplify installation, and will let you know once it is updated.

Thanks for your interest and understanding.

xinghen91 commented 5 months ago

(screenshots attached) I have set up the whole project and got it running from the image, but when I access the address in a browser I get this error. I'm not sure whether the access address is wrong or something in the deployment is off. Could you help me figure it out? Also, is there a community group where this can be discussed?

zwh-sdu commented 5 months ago

> I got the whole project running from the image, but accessing the address in a browser reports an error. Is the address wrong, or is something off in the deployment?

Hello, this step deploys only the retrieval module. The address you have deployed is the one the retrieval part of cli_demo.py sends requests to, i.e. the "pylucene address deployed for statute retrieval" and the "pylucene address deployed for similar-case retrieval" in the command below:

python cli_demo.py --url_lucene_task1 "pylucene address deployed for statute retrieval"  --url_lucene_task2 "pylucene address deployed for similar-case retrieval"
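
For wiring cli_demo.py up against a deployed address, the retrieval service can be smoke-tested with stdlib-only code. This is only a sketch: the endpoint path and the JSON field name "query" are assumptions, so check api.py under pylucene_task* for the interface the service actually exposes:

```python
import json
import urllib.request

def build_request(base_url: str, query: str) -> urllib.request.Request:
    """Build a POST request for the deployed retrieval service.

    The JSON field name "query" is an assumption; check api.py in
    pylucene_task* for the field the service actually expects.
    """
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        base_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def query_retrieval(base_url: str, query: str, timeout: int = 10):
    """Send the query and decode the JSON response."""
    req = build_request(base_url, query)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, query_retrieval("http://127.0.0.1:9231/search", "盗窃罪") would POST the query text and decode the reply, assuming that (hypothetical) host, port, and path.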
zwh-sdu commented 5 months ago

> Thanks for the reply; I'm just used to AutoModelForCausalLM. Also, api.py under pylucene_task* imports a series of org.apache.lucene packages. How are these installed?

Hello, we have updated the deployment code for ES (Elasticsearch) retrieval. You can try deploying with ES in place of pylucene.
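
Following up on the ES option: an Elasticsearch query is just a JSON body sent over HTTP, so moving off pylucene mostly means building bodies like the following. The field name "content" is a placeholder; the actual field comes from the index mapping in the updated deployment code:

```python
import json

def build_es_query(text: str, field: str = "content", size: int = 5) -> str:
    """Build an Elasticsearch match-query body for statute / case retrieval.

    The field name "content" is a placeholder; consult the index mapping
    in the updated ES deployment code for the real field name.
    """
    body = {"size": size, "query": {"match": {field: text}}}
    return json.dumps(body, ensure_ascii=False)

print(build_es_query("故意伤害 量刑"))
```

The resulting string would be POSTed to the index's _search endpoint of the deployed ES instance.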