wzdavid / ThinkRAG

An LLM RAG system that runs on your laptop: a retrieval-augmented generation system based on large language models, easily deployed on a laptop for local knowledge-base Q&A.
https://thinkrag.streamlit.app/
MIT License

Answering questions fails after uploading a file #7

Open · tik-seven opened this issue 6 days ago

tik-seven commented 6 days ago

After deploying locally, I configured Ollama's llama3 model successfully. I uploaded a docx file without problems, but when I sent a question, the backend reported this error:

```
Traceback (most recent call last):
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 88, in exec_func_with_error_handling
    result = func()
             ^^^^^^
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 579, in code_to_exec
    exec(code, module.__dict__)
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\app.py", line 48, in <module>
    pg.run()
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\streamlit\navigation\page.py", line 303, in run
    exec(code, module.__dict__)
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\frontend\Document_QA.py", line 128, in <module>
    main()
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\frontend\Document_QA.py", line 118, in main
    chatbox()
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\frontend\Document_QA.py", line 71, in chatbox
    response_text = st.write_stream(response.response_gen)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\streamlit\runtime\metrics_util.py", line 410, in wrapped_func
    result = non_optional_func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\streamlit\elements\write.py", line 174, in write_stream
    for chunk in stream:  # type: ignore
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\llama_index\core\llms\llm.py", line 127, in gen
    for response in chat_response_gen:
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\llama_index\core\llms\callbacks.py", line 186, in wrapped_gen
    for x in f_return_val:
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\llama_index\llms\ollama\base.py", line 333, in gen
    token_counts = self._get_response_token_counts(r)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\BaiduSyncdisk\wk\ThinkRAG-main\think\Lib\site-packages\llama_index\llms\ollama\base.py", line 198, in _get_response_token_counts
    total_tokens = prompt_tokens + completion_tokens
TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'
```
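
The failing frame is `_get_response_token_counts` in `llama_index/llms/ollama/base.py`: both `prompt_tokens` and `completion_tokens` are `None`, apparently because the streamed Ollama chunk carried no token-usage fields, so the addition raises `TypeError`. Below is a minimal sketch, not the actual llama_index source, of this failure mode and one defensive workaround; the field names `prompt_eval_count` and `eval_count` are taken from Ollama's chat API final-chunk response and their presence here is an assumption.

```python
from typing import Optional


def safe_token_counts(raw_chunk: dict) -> Optional[dict]:
    """Return token usage for a streamed chunk, or None when unavailable."""
    prompt_tokens = raw_chunk.get("prompt_eval_count")  # often absent mid-stream
    completion_tokens = raw_chunk.get("eval_count")     # often absent mid-stream
    if prompt_tokens is None and completion_tokens is None:
        # Adding None + None is exactly the TypeError in the traceback above;
        # skipping token accounting for this chunk avoids it.
        return None
    return {
        "prompt_tokens": prompt_tokens or 0,
        "completion_tokens": completion_tokens or 0,
        "total_tokens": (prompt_tokens or 0) + (completion_tokens or 0),
    }


# Intermediate streamed chunks typically lack usage info; only the final
# chunk (done=True) reports it.
print(safe_token_counts({"message": {"content": "hi"}}))   # -> None
print(safe_token_counts({"done": True,
                         "prompt_eval_count": 12,
                         "eval_count": 34}))                # -> totals dict
```

If the installed `llama-index-llms-ollama` version has this unguarded addition, upgrading the package may also resolve it; that is an assumption to verify against the package changelog.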
wzdavid commented 6 days ago

Thanks for the feedback, we'll take a look.