kaixindelele / ChatPaper

Use ChatGPT to summarize arXiv papers. Accelerate the whole research workflow: use ChatGPT for full-text paper summarization, professional translation, polishing, reviewing, and drafting review responses.
https://chatwithpaper.org

Some papers can't be processed #231

Open mactone opened 1 year ago

mactone commented 1 year ago

First, I would like to thank you for open-sourcing this. It's a good tool.

When I tried to process PDF files in a folder using `python .\chat_paper.py --key_word "ammonia combustion industrial" --pdf_path "C:\Users\csc-1\Downloads\Papers"`, most of the PDFs were processed successfully, except this one: *A Review on Combustion Characteristics of Ammonia as a Carbon-Free Fuel*.

Error shown below:

```
Key word: ammonia combustion industrial
Query: all: ChatGPT robot
Sort: SortCriterion.Relevance
root: C:\Users\csc-1\Downloads\Papers dirs: [] files: ['A review on combustion characteristics of ammonia as a carbon-free fuel-Jun Li-2021.pdf']
max_font_sizes [9.962599754333496, 11.955100059509277, 11.955100059509277, 11.955100059509277, 11.955100059509277, 11.955100059509277, 11.955100059509277, 11.955100059509277, 11.955100059509277, 20.92169952392578]
section_page_dict {'Introduction': 0, 'Results': 9}
0 Introduction 0
start_page, end_page: 0 9
1 Results 9
start_page, end_page: 9 15
------------------paper_num: 1------------------
0 A review on combustion characteristics of ammonia as a carbon-free fuel-Jun Li-2021.pdf
summary_error: This model's maximum context length is 4097 tokens. However, your messages resulted in 4316 tokens. Please reduce the length of the messages. <class 'openai.error.InvalidRequestError'> chat_paper.py 453
Traceback (most recent call last):
  File "C:\Users\csc-1\python\ChatPaper\chat_paper.py", line 453, in summary_with_chat
    chat_summary_text = self.chat_summary(text=text)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "C:\Users\csc-1\miniconda3\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\csc-1\miniconda3\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "C:\Users\csc-1\python\ChatPaper\chat_paper.py", line 692, in chat_summary
    response = openai.ChatCompletion.create(
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 4316 tokens. Please reduce the length of the messages.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "C:\Users\csc-1\python\ChatPaper\chat_paper.py", line 787, in <module>
    chat_paper_main(args=parser.parse_args())
  File "C:\Users\csc-1\python\ChatPaper\chat_paper.py", line 752, in chat_paper_main
    reader1.summary_with_chat(paper_list=paper_list)
  File "C:\Users\csc-1\python\ChatPaper\chat_paper.py", line 465, in summary_with_chat
    chat_summary_text = self.chat_summary(text=text, summary_prompt_token=summary_prompt_token)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "C:\Users\csc-1\miniconda3\lib\concurrent\futures\_base.py", line 451, in result
    return self.__get_result()
  File "C:\Users\csc-1\miniconda3\lib\concurrent\futures\_base.py", line 403, in __get_result
    raise self._exception
  File "C:\Users\csc-1\miniconda3\lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "C:\Users\csc-1\python\ChatPaper\chat_paper.py", line 692, in chat_summary
    response = openai.ChatCompletion.create(
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\csc-1\miniconda3\lib\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 13291 tokens. Please reduce the length of the messages.
```
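For context, the error itself points at the cause: the section text passed to `chat_summary` exceeds the model's 4097-token context window, so it would need to be clipped (or split into chunks) before the `openai.ChatCompletion.create` call. A minimal sketch of the clipping idea, using a rough characters-per-token heuristic rather than a real tokenizer (`clip_to_token_budget` and `CHARS_PER_TOKEN` are hypothetical names for illustration, not part of ChatPaper's code; exact counts would come from a tokenizer library such as tiktoken):

```python
# Rough, library-free sketch of clipping text to an approximate token
# budget before sending it to the API. A common rule of thumb is about
# four characters per English token; this is a coarse heuristic, not a
# real tokenizer, so treat the result as an approximation only.

CHARS_PER_TOKEN = 4  # heuristic, not an exact tokenizer

def clip_to_token_budget(text: str, max_tokens: int) -> str:
    """Keep at most roughly max_tokens worth of characters."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= max_chars else text[:max_chars]

# e.g. leave headroom in a 4097-token window for the prompt and reply:
clipped = clip_to_token_budget("word " * 5000, max_tokens=3000)
```

A budget well under 4097 is deliberate: the system/user prompt and the completion itself also consume tokens from the same window.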