run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: Outdated guidance code #9797

Closed amindadgar closed 6 months ago

amindadgar commented 9 months ago

Bug Description

When trying the GuidanceQuestionGenerator with GuidanceOpenAI, I found it cannot run with the guidance==0.1.10 package. Looking at their repository, it seems there have been major changes since version 0.0.64. I then downgraded to guidance==0.0.64 and ran the example from the documentation (with some other data of my own): https://docs.llamaindex.ai/en/stable/examples/output_parsing/guidance_sub_question.html#construct-sub-question-query-engine-and-run-some-queries

When querying the engine, I get the error below:

File "/Users/myuser/projects/sample-proj/sample-proj-env/lib/python3.11/site-packages/guidance/llms/_openai.py", line 348, in _library_call
    prev_base = openai.api_base
                ^^^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'api_base'

Version

0.9.24

Steps to Reproduce

  1. Prepare a query_engine using the data I have
  2. Prepare the ToolMetadata (a list)
  3. Prepare the QueryEngineTool (a list)
  4. Initialize the GuidanceQuestionGenerator using GuidanceOpenAI as the guidance_llm
  5. Initialize the SubQuestionQueryEngine from the query engine tools made in step 3 and the question generator made in step 4
  6. Create the QueryBundle using the embedding model I have
  7. Pass the user query and the embedding to the sub-question query engine (error raised here)

Some pseudocode:

    # Approximate import paths for llama-index 0.9.x with guidance==0.0.64;
    # adjust them to the versions you have installed.
    from guidance.llms import OpenAI as GuidanceOpenAI
    from llama_index.embeddings import CohereEmbedding
    from llama_index.query_engine import SubQuestionQueryEngine
    from llama_index.question_gen.guidance_generator import GuidanceQuestionGenerator
    from llama_index.schema import QueryBundle
    from llama_index.tools import QueryEngineTool, ToolMetadata

    query_engine_tools: list[QueryEngineTool] = []
    tools: list[ToolMetadata] = []

    # in my initial code I have one query engine
    my_query_engine = ...  # prepare the query engine over my data (omitted)
    tool_metadata = ...  # prepare the ToolMetadata for that query engine (omitted)

    # appending the tools I have
    tools.append(tool_metadata)
    query_engine_tools.append(
        QueryEngineTool(
            query_engine=my_query_engine,
            metadata=tool_metadata,
        )
    )

    question_gen = GuidanceQuestionGenerator.from_defaults(
        guidance_llm=GuidanceOpenAI("text-davinci-003"), verbose=False
    )
    s_engine = SubQuestionQueryEngine.from_defaults(
        question_gen=question_gen,
        query_engine_tools=query_engine_tools,
    )

    query = "..."  # the user query string
    query_bundle = QueryBundle(
        query_str=query, embedding=CohereEmbedding().get_text_embedding(text=query)
    )

    # error was raised here
    response = s_engine.query(query_bundle)

Relevant Logs/Tracebacks

Traceback (most recent call last):
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/llms/_openai.py", line 665, in __call__
    out = await self.llm.caller(**call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/llms/_openai.py", line 348, in _library_call
    prev_base = openai.api_base
                ^^^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'api_base'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 109, in run
    await self.visit(self.parse_tree, VariableStack([self.program._variables], self))
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 559, in visit
    visited_children.append(await self.visit(child, variable_stack, inner_next_node, inner_next_next_node, inner_prev_node, node, parent_node))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 524, in visit
    command_output = await command_function(*positional_args, **named_args)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/library/_geneach.py", line 119, in geneach
    new_content += await parser.visit(
                   ^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 559, in visit
    visited_children.append(await self.visit(child, variable_stack, inner_next_node, inner_next_next_node, inner_prev_node, node, parent_node))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 559, in visit
    visited_children.append(await self.visit(child, variable_stack, inner_next_node, inner_next_next_node, inner_prev_node, node, parent_node))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 266, in visit
    visited_children = [await self.visit(child, variable_stack, next_node, next_next_node, prev_node, node, parent_node) for child in node]
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 266, in <listcomp>
    visited_children = [await self.visit(child, variable_stack, next_node, next_next_node, prev_node, node, parent_node) for child in node]
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/_program_executor.py", line 379, in visit
    command_output = await command_function(*positional_args, **named_args)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/library/_gen.py", line 140, in gen
    gen_obj = await parser.llm_session(
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/myuser/projects/sample-proj-env/lib/python3.11/site-packages/guidance/llms/_openai.py", line 667, in __call__
    except openai.error.RateLimitError:
           ^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'error'

Error in program:  module 'openai' has no attribute 'error'
dosubot[bot] commented 9 months ago

🤖

Hello @amindadgar! :wave:

Just wanted to say hello and introduce myself. I'm Dosu, a friendly bot here to help with bugs and questions about LlamaIndex, or even help you become a contributor, all while we wait for a human maintainer to join us. Don't hesitate to ask if you need anything!

Oh, and I may throw in a few jokes here and there, just to keep things lively! :robot: :tada:

Based on the error logs you've provided and the context from a similar issue in the LlamaIndex repository, it seems the problem is a version mismatch between the openai package and the guidance package. The downgraded guidance==0.0.64 package expects an older openai release in which the module-level api_base attribute (and the openai.error module) still exist; openai>=1.0 removed them.

In the provided context, the resolve_openai_credentials function tries to access openai.base_url:

final_api_base = api_base or openai.base_url or DEFAULT_OPENAI_API_BASE

However, that base_url attribute only exists in openai>=1.0, while the old guidance code expects openai<1.0 (which exposes api_base instead), so whichever openai version is installed, one of the two packages will hit an AttributeError.
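As a quick way to see which openai API surface is installed (just a diagnostic sketch, not part of llama-index or guidance):

    import openai

    # openai<1.0 exposes module-level `api_base` and an `error` submodule;
    # openai>=1.0 removed both and exposes `base_url` instead.
    if hasattr(openai, "api_base"):
        print("openai<1.0 detected, api_base =", openai.api_base)
    else:
        print("openai>=1.0 detected, base_url =", openai.base_url)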

To resolve this issue, you should check which openai package version your guidance version requires and install that specific version. If the required version is not specified, you could try updating the openai package to the latest version using pip:

pip install --upgrade openai

If the problem persists, it might be a compatibility issue between the guidance package and the openai package.

As for the LlamaIndex version, a similar issue was reported in the LlamaIndex repository: [Bug]: Guidance for Sub-Question Query Engine. The suggested solution was to use llama-index==0.8.62 or earlier if you want to use guidance.
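If you prefer to stay on the old stack for now, one possible pinning (the exact versions are my best guess from this thread, not verified here) would be:

pip install "llama-index==0.8.62" "guidance==0.0.64" "openai<1.0"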

Please try these solutions and let me know if the issue persists.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

amindadgar commented 9 months ago

Thanks to the llama-index community for implementing the bot (it's very cool). I investigated the guidance and llama-index packages further and concluded the following:
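In short, the older guidance==0.0.64 pairs with openai<1.0.0, while the newer guidance==0.1.10 expects openai>=1.0. A rough check of which combination is installed (an illustrative snippet only, not from llama-index):

    from importlib.metadata import version

    # Pairing observed above: guidance 0.0.x needs openai<1.0, guidance 0.1.x needs openai>=1.0.
    guidance_ver = version("guidance")
    openai_ver = version("openai")
    print(f"guidance {guidance_ver}, openai {openai_ver}")

    if guidance_ver.startswith("0.0.") != openai_ver.startswith("0."):
        print("Mismatched pair -- expect errors like the AttributeError above")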

The best way to add support for the new version of guidance is to update the code in the llama-index library (the parts that use guidance) to target the newer guidance==0.1.10. Would that be possible for the community of devs working on this library?

dosubot[bot] commented 9 months ago

@logan-markewich, this issue requires your expertise in navigating the compatibility between llama-index and the newer version of the guidance package.

logan-markewich commented 9 months ago

@amindadgar yea we can probably update it at some point this week. Happy to let you do it as well if you have more time :)

amindadgar commented 9 months ago

@logan-markewich Sure, I'll start on it tomorrow! If there's a standard to follow, please don't hesitate to let me know :)

dosubot[bot] commented 6 months ago

Hi, @amindadgar,

I'm helping the LlamaIndex team manage their backlog and am marking this issue as stale. The issue involved llama-index 0.9.24 being incompatible with the newer guidance package when using the GuidanceOpenAI module, resulting in an AttributeError when querying the engine. It was found that the older guidance package (guidance==0.0.64) is only compatible with openai<1.0.0, while the newer version (guidance==0.1.10) is compatible with openai>=1.0. The proposed solution was to update the parts of the llama-index library that use guidance to the newer guidance==0.1.10. The issue has been resolved by updating the library to support the new version of guidance, and the community has agreed to this update.

Is this issue still relevant to the latest version of the LlamaIndex repository? If it is, please let the LlamaIndex team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

For further details, refer to the issue on GitHub: Issue Link

Thank you!