wxywb / history_rag


[bug] ValueError: Cannot use llm_chat_callback on an instance without a callback_manager attribute. #40

Open CapitalWilliam opened 7 months ago

CapitalWilliam commented 7 months ago

Problem description

While following the project's Windows instructions step by step, I ran into the problem below.

Environment information

Steps taken

  1. Cloned the project locally.
  2. Created a virtual environment in .venv (Python version 3.9.12).
  3. Ran pip install; during this step the latest llama-index (0.10.x) was installed automatically.
  4. Manually pinned llama-index back to 0.9.39 with pip (steps 1-4 are recapped as commands after the traceback below).
  5. Before the first run, VSCode hit clashes between module names and .py file names of the same name; I had to modify 3 places before python cli.py would run. The modified places were:

    • from llama_index.core.query_pipeline.components import (...) (components.py under the components module was not resolved)

    • from llama_index.core.llms.llm import LLM (llm.py under the llm module was not resolved)

    • from llama_index.core.llms.llm import ChatMessage, MessageRole

    • from llama_index.core import global_handler (there is no global_handler in the llama_index.core module)

  6. The build txt step completed without problems.
  7. Running ask -d returned a correct answer, but it was followed by a ValueError.
Traceback (most recent call last):
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 120, in <module>
    cli.run()
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 53, in run
    self.parse_input(command_text)
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 74, in parse_input
    self.question_answer()
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 109, in question_answer
    self.query(question)
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 86, in query
    ans = self._executor.query(question)
  File "C:\Users\A\VscodeProjects\history_rag\executor.py", line 237, in query
    response = self.query_engine.query(question)
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\core\base_query_engine.py", line 40, in query
    return self._query(str_or_query_bundle)
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\query_engine\retriever_query_engine.py", line 172, in _query 
    response = self._response_synthesizer.synthesize(
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\response_synthesizers\base.py", line 168, in synthesize      
    response_str = self.get_response(
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\response_synthesizers\compact_and_refine.py", line 38, in get_response
    return super().get_response(
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\response_synthesizers\refine.py", line 146, in get_response  
    response = self._give_response_single(
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\response_synthesizers\refine.py", line 202, in _give_response_single
    program(
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\response_synthesizers\refine.py", line 64, in __call__       
    answer = self._llm.predict(
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\core\llms\llm_.py", line 239, in predict
    chat_response = self.chat(messages)
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\core\llms\callbacks.py", line 84, in wrapped_llm_chat        
    with wrapper_logic(_self) as callback_manager:
  File "C:\Users\A\AppData\Local\Programs\Python\Python39\lib\contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "c:\Users\A\VscodeProjects\history_rag\.venv\lib\site-packages\llama_index\core\llms\callbacks.py", line 30, in wrapper_logic
    raise ValueError(
ValueError: Cannot use llm_chat_callback on an instance without a callback_manager attribute.
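For reference, a rough command transcript of steps 1-4 above (a sketch, not verbatim shell history; the requirements.txt file name is an assumption about the project's install step):

rem steps 1-4 in Windows cmd ("requirements.txt" is assumed)
git clone https://github.com/wxywb/history_rag.git
cd history_rag
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
rem pip resolves llama-index to 0.10.x above, so pin it back:
pip install llama-index==0.9.39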
wxywb commented 7 months ago

https://github.com/wxywb/history_rag/blob/fa38cf01d72bc4113a22a38a4f4e4dac9e47ce41/executor.py#L15

https://github.com/wxywb/history_rag/blob/fa38cf01d72bc4113a22a38a4f4e4dac9e47ce41/executor.py#L21

Let me make sure I understand the problem: are you saying these lines in executor.py fail to run, and that this is with llama-index 0.9.39?

CapitalWilliam commented 7 months ago

https://github.com/wxywb/history_rag/blob/fa38cf01d72bc4113a22a38a4f4e4dac9e47ce41/executor.py#L15

https://github.com/wxywb/history_rag/blob/fa38cf01d72bc4113a22a38a4f4e4dac9e47ce41/executor.py#L21

Let me make sure I understand the problem: are you saying these lines in executor.py fail to run, and that this is with llama-index 0.9.39?

What I'm running into is really two things. The first is the "module name clashes with .py file name" conflict this project hits under VSCode; it may be related to VSCode's Python extension or something else, so I planned to try PyCharm later and update here if that worked. [Update] I tried PyCharm once and hit both problems again.

The second is that I modified the following places, either replacing the clashing names or imitating how other versions in the llama-index repository write the equivalent code (for global_handler); a version-guard sketch of the pattern follows this list.

from llama_index.core.query_pipeline.components import (...) (components.py under the components module was not resolved)

from llama_index.core.llms.llm import LLM (llm.py under the llm module was not resolved)

from llama_index.core.llms.llm import ChatMessage, MessageRole

from llama_index.core import global_handler (there is no global_handler in the llama_index.core module)
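A minimal sketch of that version-guard pattern (not the repo's actual code; the 0.9.x fallback paths are assumptions based on the older package layout):

# Prefer the 0.10.x "llama_index.core" layout, fall back to the 0.9.x one.
try:  # llama-index >= 0.10.x
    from llama_index.core.llms import ChatMessage, MessageRole
    from llama_index.core.llms.llm import LLM
except ImportError:  # llama-index 0.9.x (fallback paths assumed)
    from llama_index.llms import ChatMessage, MessageRole
    from llama_index.llms.llm import LLM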

After those changes, the following sequence runs cleanly: 1. run python cli.py in a cmd window; 2. run milvus at the program's prompt; 3. run build *.txt at the program's prompt; 4. enter ask -d at the program's prompt; 5. type the question 华雄是被谁杀死的 ("Who killed Hua Xiong?") at the program's prompt.

After those steps, I hit the ValueError from the title.

I also tried adding breakpoints to the lines above to catch it, but with debug mode on in VSCode, the first break is a Raised Exceptions stop inside executor.py.

The exact exception location is here: https://github.com/wxywb/history_rag/blob/fa38cf01d72bc4113a22a38a4f4e4dac9e47ce41/executor.py#L237

Exception has occurred: ValueError
Cannot use llm_chat_callback on an instance without a callback_manager attribute.
  File "C:\Users\A\VscodeProjects\history_rag\executor.py", line 237, in query
    response = self.query_engine.query(question)
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 86, in query
    ans = self._executor.query(question)
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 109, in question_answer
    self.query(question)
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 74, in parse_input
    self.question_answer()
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 53, in run
    self.parse_input(command_text)
  File "C:\Users\A\VscodeProjects\history_rag\cli.py", line 120, in <module>
    cli.run()
ValueError: Cannot use llm_chat_callback on an instance without a callback_manager attribute.
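For context, a simplified paraphrase (not the library's exact code) of the guard in llama_index/core/llms/callbacks.py that raises this error: the llm_chat_callback decorator refuses to run a chat call unless the LLM instance carries a callback_manager attribute.

# Simplified paraphrase of the check in llama_index/core/llms/callbacks.py.
def wrapper_logic(llm_instance):
    callback_manager = getattr(llm_instance, "callback_manager", None)
    if callback_manager is None:
        raise ValueError(
            "Cannot use llm_chat_callback on an instance "
            "without a callback_manager attribute."
        )
    return callback_manager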
wxywb commented 7 months ago

Double-check your llama-index version (pip list | grep llama), then clone a pristine copy of the repo and try running it directly from the command line instead of an IDE (VSCode, PyCharm, etc.).
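For the Windows cmd shell, which has no grep, the equivalent version check would use the findstr built-in:

pip list | findstr llama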

CapitalWilliam commented 7 months ago

Double-check your llama-index version (pip list | grep llama), then clone a pristine copy of the repo and try running it directly from the command line instead of an IDE (VSCode, PyCharm, etc.).

I tried the terminal that ships with Windows 11, and the call succeeded.

So it looks like an IDE configuration issue on my own machine; I'll post the fix here if I solve it.

IshitaArora-246 commented 7 months ago

I am using llama-index version 0.10.0 and getting the same error. The command pip list | grep llama also didn't work.

CapitalWilliam commented 7 months ago

I am using llama-index version 0.10.0 and getting the same error. The command pip list | grep llama also didn't work.

Hi Ishita. I looked into this, and it seems the llama-index library was substantially rewritten starting with version 0.10.x.

I believe llama-index works fine at versions >= 0.9.4x.

You can run pip install llama-index==0.9.39 --upgrade and, if you are facing the same problems I mentioned, run history_rag from cmd instead of an IDE; the commands are recapped below.
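A recap of that suggestion as plain commands (run inside the project's virtual environment, from cmd rather than an IDE terminal):

rem pin llama-index back to the 0.9.x line
pip install llama-index==0.9.39 --upgrade
rem confirm the pin took effect (findstr replaces grep on Windows, per the note above)
pip list | findstr llama
python cli.py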