GaiZhenbiao / ChuanhuChatGPT

GUI for the ChatGPT API and many LLMs. Supports agents, file-based QA, GPT fine-tuning, and querying with web search. All with a neat UI.
https://huggingface.co/spaces/JohnSmith9982/ChuanhuChatGPT
GNU General Public License v3.0

[Bug]: Server-side file retrieval yields a None index #450

Closed · xingfanxia closed this issue 1 year ago

xingfanxia commented 1 year ago

Is there an existing issue for this bug?

Error description

2023-03-29 08:25:09,696 [INFO] [chat_func.py:252] 输入为:test
2023-03-29 08:25:09,705 [INFO] [chat_func.py:259] 加载索引中……(这可能需要几分钟)
/root/.local/lib/python3.9/site-packages/langchain/llms/openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
/root/.local/lib/python3.9/site-packages/langchain/llms/openai.py:608: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
2023-03-29 08:25:11,583 [INFO] [llama_func.py:37] loading file: /tmp/a44d3e46dc0c56ca318d7c9a08539fef04f8e6e4/是的NFT更像是一场骗局 9546ce7713244f059f470d94f1d9b1be.md
__init__() got an unexpected keyword argument 'llm_predictor'
2023-03-29 08:25:11,594 [INFO] [llama_func.py:119] Question: test
Traceback (most recent call last):
File "/root/.local/lib/python3.9/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/root/.local/lib/python3.9/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/root/.local/lib/python3.9/site-packages/gradio/blocks.py", line 898, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/.local/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/root/.local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/root/.local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/root/.local/lib/python3.9/site-packages/gradio/utils.py", line 549, in async_iteration
return next(iterator)
File "/app/modules/chat_func.py", line 264, in predict
history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot, reply_language)
File "/app/modules/llama_func.py", line 121, in chat_ai
response, chatbot_display, status_text = ask_ai(
File "/app/modules/llama_func.py", line 171, in ask_ai
response = index.query(
AttributeError: 'NoneType' object has no attribute 'query'

After deploying via Docker on Railway, uploading a file and then asking a question leaves `index` as None. No changes were made to this repo; fresh from the latest commit on main.

I suspect the dependency versions installed inside the Docker image may differ.

__init__() got an unexpected keyword argument 'llm_predictor'
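The `llm_predictor` line above is the likely root cause: with llama_index 0.5.0 the index constructor no longer accepts that keyword, the resulting `TypeError` is caught and merely logged, and the caller is left with `index = None`, which then produces the `AttributeError` in the traceback. A stdlib-only sketch of that failure chain (`Index` and `build_index` are illustrative stand-ins, not the repo's actual code):

```python
import logging

# Illustrative stand-ins: 'Index' mimics a constructor whose signature
# changed upstream; 'build_index' mimics the pattern of catching and
# only logging the construction error instead of re-raising it.
class Index:
    def __init__(self):          # the 'llm_predictor' kwarg no longer exists
        pass

    def query(self, question):
        return f"answer to: {question}"

def build_index():
    try:
        return Index(llm_predictor="gpt-3.5-turbo")  # raises TypeError
    except Exception as e:
        logging.error(e)         # error is swallowed; caller gets None
        return None

index = build_index()
assert index is None
# index.query("test") would now raise:
# AttributeError: 'NoneType' object has no attribute 'query'
```

The exact `TypeError` message text varies slightly across Python versions, but the chain is the same: a swallowed construction error surfaces later as a confusing `NoneType` crash at the query site.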

Steps to reproduce

After deploying via Docker on Railway, upload a file and ask a question; `index` is None. No changes were made to this repo; fresh from the latest commit on main.

Error logs

(Identical to the traceback in the error description above.)

Environment

- OS: Railway Docker
- Browser: Chrome
- Gradio version: 
- Python version: Python 3.9

Additional notes

No response

xingfanxia commented 1 year ago

-> https://github.com/jerryjliu/llama_index/releases 0.5.0 appears to introduce a bunch of breaking changes

xingfanxia commented 1 year ago

Reproduced this locally on my Mac as well; the error occurs after upgrading to 0.5.0.

GaiZhenbiao commented 1 year ago

Pinning the version to 0.4.40 for now; I'll adapt the code to the new llama_index shortly.
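For anyone hitting this on a deployment in the meantime, the pin would look like this in `requirements.txt` (assuming the PyPI package name `llama-index`; adjust if the project pins it under a different name):

```
llama-index==0.4.40
```

Rebuilding the Docker image after pinning ensures the image stops picking up 0.5.x automatically.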

xingfanxia commented 1 year ago

@GaiZhenbiao OK, it basically comes down to this:

> We now use a ServiceContext container to wrap all these common arguments of passing in custom LLM’s, embedding models, chunk sizes, prompt helpers.

However, even after I changed this there still seems to be a problem; the `query` call still errors.
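The migration pattern described in the quoted changelog can be sketched generically. The classes below are simplified stdlib-only mock-ups, not llama_index's real implementations; the point is the shape of the change: per-index keyword arguments like `llm_predictor` move into a single `ServiceContext` container that is passed as one argument.

```python
from dataclasses import dataclass, field

# Simplified stand-ins for the 0.5.0 pattern quoted above: instead of
# passing llm_predictor / embed_model / chunk sizes individually to each
# index, they are bundled into a single ServiceContext container.
@dataclass
class ServiceContext:
    llm_predictor: str = "default-llm"
    embed_model: str = "default-embeddings"
    chunk_size_limit: int = 512

    @classmethod
    def from_defaults(cls, **overrides):
        # Start from defaults, override only what the caller supplies.
        return cls(**overrides)

@dataclass
class GPTSimpleVectorIndex:
    documents: list
    service_context: ServiceContext = field(default_factory=ServiceContext)

    def query(self, question):
        return f"[{self.service_context.llm_predictor}] answer to: {question}"

ctx = ServiceContext.from_defaults(llm_predictor="ChatOpenAI")
index = GPTSimpleVectorIndex(["doc1"], service_context=ctx)
print(index.query("test"))
```

If the `query` call still fails after this change, the construction path and the query path both need checking against the 0.5.0 release notes, since the old per-call keyword arguments were removed in more than one place.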

xingfanxia commented 1 year ago

Fixed it; I'll submit a PR.

xingfanxia commented 1 year ago

@GaiZhenbiao https://github.com/GaiZhenbiao/ChuanhuChatGPT/pull/453