mMrBun / AIPC

Apache License 2.0

ValueError: The following `model_kwargs` are not used by the model: ['query', 'tokenizer', 'history'] (note: typos in the generate arguments will also show up in this list) #16

Closed: shuifuture closed this issue 5 months ago

shuifuture commented 7 months ago

/home/censoft/anaconda3/envs/chatglm/bin/python /home/censoft/2tbdataset/ysc/code/llm/Chat2BI-master/web_demo.py
/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/pydantic/_internal/_config.py:322: UserWarning: Valid config keys have changed in V2:

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/routes.py", line 569, in predict
    output = await route_utils.call_process_api(
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/route_utils.py", line 231, in call_process_api
    with utils.MatplotlibBackendMananger():
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/utils.py", line 888, in __exit__
    matplotlib.use(self._original_backend)
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/matplotlib/__init__.py", line 1249, in use
    plt.switch_backend(name)
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/matplotlib/pyplot.py", line 343, in switch_backend
    canvas_class = module.FigureCanvas
AttributeError: module 'backend_interagg' has no attribute 'FigureCanvas'. Did you mean: 'FigureCanvasAgg'?
You try to use a model that was created with version 2.2.2, however, your version is 2.2.1. This might cause unexpected behavior or errors. In that case, try to update to the latest version.

Building embedder...
Building corpus...
Building corpus embeddings with embedder...
Retrieving...
Loading checkpoint shards: 100%|██████████| 7/7 [00:02<00:00,  3.46it/s]
Traceback (most recent call last):
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/blocks.py", line 1561, in process_api
    result = await self.call_function(
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/blocks.py", line 1179, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/gradio/utils.py", line 695, in wrapper
    response = f(*args, **kwargs)
  File "/home/censoft/2tbdataset/ysc/code/llm/Chat2BI-master/web_demo.py", line 14, in text_analysis
    response_data = function_calling(text, top_k, top_p, temperature, model_type)
  File "/home/censoft/2tbdataset/ysc/code/llm/Chat2BI-master/core/function_call/build_function_call_pipline.py", line 29, in function_calling
    response, code, history = class_instance.do_chat(query=query,
  File "/home/censoft/2tbdataset/ysc/code/llm/Chat2BI-master/llms/chatglm3/generate.py", line 58, in do_chat
    code = self.client.create_echarts_code(self.last_observation)
  File "/home/censoft/2tbdataset/ysc/code/llm/Chat2BI-master/llms/chatglm3/client.py", line 227, in create_echarts_code
    output = self.model.generate(query=echarts_prompt, tokenizer=self.tokenizer, history=[])
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/transformers/generation/utils.py", line 1271, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/home/censoft/anaconda3/envs/chatglm/lib/python3.10/site-packages/transformers/generation/utils.py", line 1144, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['query', 'tokenizer', 'history'] (note: typos in the generate arguments will also show up in this list)
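
For context, the final ValueError comes from transformers' argument validation: generate() checks every extra keyword argument against what the model's forward() and generation code actually consume, and raises if any are left over. A minimal, self-contained repro, using gpt2 purely as a stand-in model (a hypothetical example, not this project's code):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("hello", return_tensors="pt").input_ids

try:
    # gpt2's forward() takes neither `query` nor `history`, so
    # _validate_model_kwargs rejects them before decoding starts.
    model.generate(input_ids, query="hi", history=[])
except ValueError as e:
    print(e)  # The following `model_kwargs` are not used by the model: ['query', 'history'] ...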

shuifuture commented 7 months ago

What transformers version are you using? Mine is transformers==4.30.2. I also tried switching to sentence-transformers 2.2.1, sentencepiece 0.1.99, tokenizers 0.12.1, transformers 4.30.0.

mMrBun commented 7 months ago

transformers==4.30.2. I pinned this version in the requirements file.

ben-8878 commented 6 months ago

What transformers version are you using? Mine is transformers==4.30.2. I also tried switching to sentence-transformers 2.2.1, sentencepiece 0.1.99, tokenizers 0.12.1, transformers 4.30.0.

I'm hitting the same problem. I updated the packages as suggested, but it still fails.

mMrBun commented 6 months ago

@shuifuture @v-yunbin It works fine in my local tests. Please update the model: run git pull in the model directory, or pull the checkpoint again. My local dependency versions are attached below, and an API sketch follows the list.

Package                       Version
----------------------------- -----------
accelerate                    0.27.2
aiofiles                      23.2.1
altair                        5.2.0
annotated-types               0.6.0
anyio                         3.7.1
asttokens                     2.4.1
attrs                         23.2.0
certifi                       2024.2.2
charset-normalizer            3.3.2
click                         8.1.7
colorama                      0.4.6
contourpy                     1.2.0
cycler                        0.12.1
decorator                     5.1.1
einops                        0.7.0
exceptiongroup                1.2.0
executing                     2.0.1
fastapi                       0.103.1
ffmpy                         0.3.2
filelock                      3.13.1
fonttools                     4.49.0
fsspec                        2024.2.0
gradio                        4.20.1
gradio_client                 0.11.0
h11                           0.14.0
httpcore                      1.0.4
httpx                         0.27.0
huggingface-hub               0.21.4
idna                          3.6
importlib_resources           6.1.3
ipython                       8.22.2
jedi                          0.19.1
Jinja2                        3.1.3
joblib                        1.3.2
jsonschema                    4.21.1
jsonschema-specifications     2023.12.1
kiwisolver                    1.4.5
markdown-it-py                3.0.0
MarkupSafe                    2.1.5
matplotlib                    3.8.3
matplotlib-inline             0.1.6
mdurl                         0.1.2
mpmath                        1.3.0
networkx                      3.2.1
nltk                          3.8.1
numpy                         1.26.4
nvidia-cublas-cu12            12.1.3.1
nvidia-cuda-cupti-cu12        12.1.105
nvidia-cuda-nvrtc-cu12        12.1.105
nvidia-cuda-runtime-cu12      12.1.105
nvidia-cudnn-cu12             8.9.2.26
nvidia-cufft-cu12             11.0.2.54
nvidia-curand-cu12            10.3.2.106
nvidia-cusolver-cu12          11.4.5.107
nvidia-cusparse-cu12          12.1.0.106
nvidia-nccl-cu12              2.19.3
nvidia-nvjitlink-cu12         12.4.99
nvidia-nvtx-cu12              12.1.105
orjson                        3.9.15
packaging                     23.2
pandas                        2.2.1
parso                         0.8.3
pexpect                       4.9.0
pillow                        10.2.0
pip                           23.3.1
prettytable                   3.10.0
prompt-toolkit                3.0.43
psutil                        5.9.8
ptyprocess                    0.7.0
pure-eval                     0.2.2
pydantic                      2.6.3
pydantic_core                 2.16.3
pydub                         0.25.1
pyecharts                     2.0.5
Pygments                      2.17.2
pyparsing                     3.1.2
python-dateutil               2.9.0.post0
python-multipart              0.0.9
pytz                          2024.1
PyYAML                        6.0.1
referencing                   0.33.0
regex                         2023.12.25
requests                      2.31.0
rich                          13.7.1
rpds-py                       0.18.0
ruff                          0.3.1
safetensors                   0.4.2
scikit-learn                  1.4.1.post1
scipy                         1.12.0
semantic-version              2.10.0
sentence-transformers         2.2.2
sentencepiece                 0.2.0
setuptools                    68.2.2
shellingham                   1.5.4
simplejson                    3.19.2
six                           1.16.0
sniffio                       1.3.1
stack-data                    0.6.3
starlette                     0.27.0
sympy                         1.12
threadpoolctl                 3.3.0
tiktoken                      0.6.0
tokenizers                    0.13.3
tomlkit                       0.12.0
toolz                         0.12.1
torch                         2.2.1
torchvision                   0.17.1
tqdm                          4.66.2
traitlets                     5.14.1
transformers                  4.30.2
transformers-stream-generator 0.0.4
triton                        2.2.0
typer                         0.9.0
typing_extensions             4.10.0
tzdata                        2024.1
urllib3                       2.2.1
uvicorn                       0.27.1
wcwidth                       0.2.13
websockets                    11.0.3
wheel                         0.41.2
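
For anyone hitting the ValueError above: the three rejected names (query, tokenizer, history) are accepted by ChatGLM3's remote-code chat() entry point, not by the stock transformers generate(). A minimal sketch, assuming the standard THUDM/chatglm3-6b remote-code API (illustrative only, not this repo's exact client code):

from transformers import AutoModel, AutoTokenizer

model_path = "THUDM/chatglm3-6b"  # or a local checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).eval()

# OK: the custom chat() wrapper tokenizes the query and threads history.
response, history = model.chat(tokenizer, "你好", history=[])

# Raises the ValueError from this issue: the stock generate() rejects
# keyword arguments the model's forward() does not consume.
# model.generate(query="你好", tokenizer=tokenizer, history=[])
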
ben-8878 commented 6 months ago

@shuifuture @v-yunbin It works fine in my local tests. Please update the model: run git pull in the model directory, or pull the checkpoint again.


Qwen works, but chatglm3 throws this error, and I'm on the latest chatglm3.

mMrBun commented 6 months ago

@v-yunbin chatglm3's results may disappoint you. I'm working on a new version; you can test with Qwen for now. 🤗

ben-8878 commented 6 months ago

@v-yunbin chatglm3's results may disappoint you. I'm working on a new version; you can test with Qwen for now. 🤗

Looking forward to the new version. Is Qwen currently the best model for tool calling?

mMrBun commented 6 months ago

@v-yunbin Qwen's JSON understanding and parsing are the strongest among 7B-class models. Also, Qwen's tool calling was trained with LangChain's ReAct prompt template, so its stability and performance are better than chatglm3's.
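
For reference, the ReAct format mentioned here structures each turn as Thought → Action → Action Input → Observation. A paraphrased LangChain-style template (illustrative only, not Qwen's exact training prompt; the tool_descriptions/tool_names/query placeholders are assumptions):

# Paraphrased LangChain-style ReAct template. The model generates until
# it emits "Observation:", the tool named on the Action line is executed,
# and its result is appended after "Observation:" before decoding resumes.
REACT_PROMPT = """Answer the following questions as best you can. You have access to the following tools:

{tool_descriptions}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {query}"""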

ben-8878 commented 6 months ago

There's also the internlm2 model, which can call tools as well, but the results seem poor; I'm not sure whether I'm using it incorrectly.

pjcc5 commented 5 months ago

transformers version: 4.39.0

Error: ValueError: The following `model_kwargs` are not used by the model: ['stop_words_ids'] (note: typos in the generate arguments will also show up in this list)

The Qwen model was pulled on March 1: Qwen1.5-7B-Chat

Elders, what should I do about this???

Building embedder...
Building corpus...
Building corpus embeddings with embedder...
Retrieving...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████| 4/4 [00:05<00:00,  1.37s/it]
WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.
User's Query:
我们产品的销售量怎么样?

Traceback (most recent call last):
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/gradio/queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/gradio/route_utils.py", line 253, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/gradio/blocks.py", line 1695, in process_api
    result = await self.call_function(
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/gradio/blocks.py", line 1235, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/gradio/utils.py", line 692, in wrapper
    response = f(*args, **kwargs)
  File "/root/autodl-tmp/projects/Chat2BI/web_demo.py", line 14, in text_analysis
    response_data = function_calling(text, top_k, top_p, temperature, model_type)
  File "/root/autodl-tmp/projects/Chat2BI/core/function_call/build_function_call_pipline.py", line 29, in function_calling
    response, code, history = class_instance.do_chat(query=query,
  File "/root/autodl-tmp/projects/Chat2BI/llms/qwen/qwen_function_calling.py", line 151, in do_chat
    response, code, history = self.llm_with_plugin(prompt=query,
  File "/root/autodl-tmp/projects/Chat2BI/llms/qwen/qwen_function_calling.py", line 41, in llm_with_plugin
    output = self.text_completion(planning_prompt + text, stop_words=['Observation:', 'Observation:\n'])
  File "/root/autodl-tmp/projects/Chat2BI/llms/qwen/qwen_function_calling.py", line 134, in text_completion
    output = self.model.generate(input_ids, stop_words_ids=stop_words_ids)
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/transformers/generation/utils.py", line 1325, in generate
    self._validate_model_kwargs(model_kwargs.copy())
  File "/root/autodl-tmp/conda/envs/Chat2BI/lib/python3.10/site-packages/transformers/generation/utils.py", line 1121, in _validate_model_kwargs
    raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['stop_words_ids'] (note: typos in the generate arguments will also show up in this list)
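
For context on this variant of the error: Qwen1.5 checkpoints run on the transformers-native Qwen2 classes, so the stop_words_ids argument from the first-generation Qwen remote code no longer exists, and the stock generate() rejects it just as it rejected query/tokenizer/history above. If custom stop words are needed with a vanilla generate(), a StoppingCriteria can emulate them. A minimal sketch (assuming tokenizer, model, and input_ids as set up in the surrounding code; illustrative, not necessarily this repo's fix):

import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnWords(StoppingCriteria):
    """Stops generation when the sequence ends with any stop word's token ids (batch size 1 assumed)."""

    def __init__(self, stop_ids_list):
        self.stop_ids_list = [torch.tensor(ids) for ids in stop_ids_list]

    def __call__(self, input_ids, scores, **kwargs):
        for stop_ids in self.stop_ids_list:
            n = stop_ids.shape[0]
            if input_ids.shape[1] >= n and torch.equal(input_ids[0, -n:].cpu(), stop_ids):
                return True
        return False

# Tokenize the ReAct stop words once, then check the tail of the
# generated sequence at every decoding step.
stop_ids_list = [tokenizer.encode(w, add_special_tokens=False)
                 for w in ["Observation:", "Observation:\n"]]
output = model.generate(
    input_ids,
    stopping_criteria=StoppingCriteriaList([StopOnWords(stop_ids_list)]),
)
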
mMrBun commented 5 months ago

@v-yunbin @pjcc5 @shuifuture @striker-hz The new version has been released; please update the code and try again.

Guanchaofeng commented 5 months ago

transformers version: 4.39.0

Error: ValueError: The following `model_kwargs` are not used by the model: ['stop_words_ids'] (note: typos in the generate arguments will also show up in this list)

The Qwen model was pulled on March 1: Qwen1.5-7B-Chat


I'm running into the same problem. Has it been resolved?

mMrBun commented 5 months ago

@Guanchaofeng Please update the code.