kaixindelele / ChatPaper

Use ChatGPT to summarize arXiv papers. Accelerate the whole research workflow: use ChatGPT for full-paper summarization, professional translation, polishing, reviewing, and writing review responses.
https://chatwithpaper.org

Error when running batch summarization: Please reduce the length of the messages. #168

Open aininot260 opened 1 year ago

aininot260 commented 1 year ago

The command I ran was:

python chat_paper.py --pdf_path "/home/nvidia/Desktop/papers/"

An excerpt of the error output is below:

prompt_token_used: 1217 completion_token_used: 323 total_token_used: 1540
response_time: 11.701 s
summary_result:
 1. Title: Signed Supermasks: Extremely compressed networks through recursion

2. Authors: Moritz Freidank, Christian Winkler, Julian Thomas, Vincent F. Hendricks, Alexander K. Højbjerg

3. Affiliation: Moritz Freidank: Technical University of Denmark

4. Keywords: Neural Networks, Compression, Pruning, Lottery Ticket Hypothesis

5. Url: https://openreview.net/forum?id=nJGUmUykxIK, Github: None

6. Summary:

- (1): The research background is that neural networks, due to the inherent growth in their parameter counts, have become hard to understand and train, so compression is needed.
- (2): Previous methods required fully retraining a new network architecture, whereas the method proposed here prunes the original network directly, greatly simplifying the model structure while preserving performance. The motivation for this approach is well explained.
- (3): This paper is the first to propose the concept of signed Supermasks: on top of pruning, the network structure is further simplified by repeatedly applying a "multiply-and-negate"-style operation. The paper also proposes a new initialization method that leaves only 70 trainable weights in a three-layer convolutional network, compressing it effectively.
- (4): Through a series of experiments, the paper demonstrates the effectiveness of the recursive compression method behind signed Supermasks, and also shows its generality, achieving strong performance on different tasks and even surpassing the comparison methods.
prompt_token_used: 1393 completion_token_used: 462 total_token_used: 1855
response_time: 16.611 s
method_error: This model's maximum context length is 4097 tokens. However, your messages resulted in 4339 tokens. Please reduce the length of the messages.
Traceback (most recent call last):
  File "/home/nvidia/Documents/git/ChatPaper/chat_paper.py", line 239, in summary_with_chat
    chat_method_text = self.chat_method(text=text)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/nvidia/Documents/git/ChatPaper/chat_paper.py", line 378, in chat_method
    response = openai.ChatCompletion.create(
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4339 tokens. Please reduce the length of the messages.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/nvidia/Documents/git/ChatPaper/chat_paper.py", line 531, in <module>
    chat_paper_main(args=paper_args)
  File "/home/nvidia/Documents/git/ChatPaper/chat_paper.py", line 494, in chat_paper_main
    reader1.summary_with_chat(paper_list=paper_list)
  File "/home/nvidia/Documents/git/ChatPaper/chat_paper.py", line 247, in summary_with_chat
    chat_method_text = self.chat_method(text=text, method_prompt_token=method_prompt_token)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/nvidia/Documents/git/ChatPaper/chat_paper.py", line 378, in chat_method
    response = openai.ChatCompletion.create(
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "/home/nvidia/miniforge3/envs/ChatPaper/lib/python3.10/site-packages/openai/api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 8370 tokens. Please reduce the length of the messages.
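The batch run dies because the `InvalidRequestError` raised inside `chat_method` propagates out of `summary_with_chat` and aborts the whole loop. A minimal sketch of a more fault-tolerant batch loop, under assumptions: `summarize_one` is a hypothetical stand-in for ChatPaper's per-paper summarization call, which may raise (for example, the context-length error above).

```python
# Sketch: keep a batch job alive when one paper fails.
# `summarize_one` is a hypothetical stand-in for the per-paper
# summarization call that may raise (e.g. a context-length error).

def summarize_one(paper):
    if len(paper["text"]) > 100:
        raise ValueError("Please reduce the length of the messages.")
    return f"summary of {paper['title']}"

def summarize_batch(papers):
    results, failures = [], []
    for paper in papers:
        try:
            results.append(summarize_one(paper))
        except Exception as exc:  # record the failure and move on
            failures.append((paper["title"], str(exc)))
    return results, failures

papers = [
    {"title": "ok-paper", "text": "short text"},
    {"title": "too-long-paper", "text": "x" * 200},
]
results, failures = summarize_batch(papers)
# One summary succeeds, one failure is recorded; the loop never aborts.
```

With this pattern, a single oversized paper would be reported at the end of the run instead of killing the remaining papers in the directory.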
kaixindelele commented 1 year ago

The situation is this: during batch summarization, if a single paper hits an error, the whole program crashes. I haven't had time to fix this bug yet, so for now I have to ask you to remove the paper that triggers the error, or adjust the hyperparameters. I'll add adaptive detection later when I have time.