run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: AttributeError: 'AzureOpenAIMultiModal' object has no attribute 'predict' #13958

Open GOWTHAM-DORA opened 1 month ago

GOWTHAM-DORA commented 1 month ago

Bug Description

I am using the subquestion query engine with the gpt-4o model, which I initialized using the AzureOpenAIMultiModal class. When I query with the subquestion query engine, I get the error: AttributeError: 'AzureOpenAIMultiModal' object has no attribute 'predict'

Version

0.10.42

Steps to Reproduce

1. Define the gpt-4o LLM using AzureOpenAIMultiModal.
2. Define the list of query engine tools.
3. Pass these query engine tools to the subquestion query engine and start querying.
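
A minimal sketch of that setup, assuming llama-index 0.10.x with llama-index-multi-modal-llms-azure-openai installed; the data path, tool metadata, deployment name, endpoint, and key below are placeholders, not values from the original report:

```python
# Hypothetical reproduction sketch; all Azure credentials and data paths are placeholders.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.multi_modal_llms.azure_openai import AzureOpenAIMultiModal

# gpt-4o initialized through the multi-modal class (this is what later fails).
llm = AzureOpenAIMultiModal(
    model="gpt-4o",
    engine="my-gpt-4o-deployment",  # Azure deployment name (placeholder)
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="<api-key>",
    api_version="2024-02-15-preview",
)

# Any query engine works as a tool; a small local vector index for illustration.
index = VectorStoreIndex.from_documents(SimpleDirectoryReader("./data").load_data())
query_engine_tools = [
    QueryEngineTool(
        query_engine=index.as_query_engine(),
        metadata=ToolMetadata(name="docs", description="Answers questions over the local docs."),
    )
]

# The multi-modal LLM is handed to the sub-question engine, whose question
# generator later calls llm.predict(), raising the AttributeError described above.
query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    llm=llm,
)
response = query_engine.query("my question")
```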

Relevant Logs/Tracebacks

AttributeError                            Traceback (most recent call last)
Cell In[12], line 1
----> 1 response=query_engine.query("my question")
      2 print(response)

File C:\Users\name\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py:198, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    194 self.span_enter(
    195     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    196 )
    197 try:
--> 198     result = func(*args, **kwargs)
    199 except BaseException as e:
    200     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File C:\Users\name\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\base\base_query_engine.py:51, in BaseQueryEngine.query(self, str_or_query_bundle)
     49     if isinstance(str_or_query_bundle, str):
     50         str_or_query_bundle = QueryBundle(str_or_query_bundle)
---> 51     query_result = self._query(str_or_query_bundle)
     52 dispatcher.event(
     53     QueryEndEvent(query=str_or_query_bundle, response=query_result)
     54 )
     55 return query_result

Cell In[10], line 116, in SubQuestionQueryEngine._query(self, query_bundle)
    112 def _query(self, query_bundle: QueryBundle) -> RESPONSE_TYPE:
    113     with self.callback_manager.event(
    114         CBEventType.QUERY, payload={EventPayload.QUERY_STR: query_bundle.query_str}
    115     ) as query_event:
--> 116         sub_questions = self._question_gen.generate(self._metadatas, query_bundle)
    118         colors = get_color_mapping([str(i) for i in range(len(sub_questions))])
    120         if self._verbose:

File C:\Users\name\AppData\Local\anaconda3\Lib\site-packages\llama_index\core\question_gen\llm_generators.py:74, in LLMQuestionGenerator.generate(self, tools, query)
     72 tools_str = build_tools_text(tools)
     73 query_str = query.query_str
---> 74 prediction = self._llm.predict(
     75     prompt=self._prompt,
     76     tools_str=tools_str,
     77     query_str=query_str,
     78 )
     80 assert self._prompt.output_parser is not None
     81 parse = self._prompt.output_parser.parse(prediction)

AttributeError: 'AzureOpenAIMultiModal' object has no attribute 'predict'
logan-markewich commented 1 month ago

Hmm, I don't think the AzureOpenAIMultiModal can be used directly in most query engines (except the multi modal query engine)

You can use AzureOpenAI and set the model to be gpt-4o instead

In general, we need to merge these multimodal classes into the main LLM class; there's too much code duplication and it's hard to maintain. It's on the roadmap.

GOWTHAM-DORA commented 1 month ago

@logan-markewich, okay, got it. So you're suggesting I use AzureOpenAI and set the model to gpt-4o instead; how can I do this?
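
For reference, a minimal sketch of that swap, assuming llama-index 0.10.x with llama-index-llms-azure-openai installed and reusing the query_engine_tools list from the earlier sketch; the deployment name, endpoint, and key are placeholders:

```python
# Hypothetical sketch: the plain AzureOpenAI LLM implements predict(), so it
# works with the sub-question engine; all credentials are placeholders.
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    model="gpt-4o",
    engine="my-gpt-4o-deployment",  # Azure deployment name (placeholder)
    azure_endpoint="https://<resource>.openai.azure.com/",
    api_key="<api-key>",
    api_version="2024-02-15-preview",
    temperature=0.0,
)

# Same tools as before; only the LLM changes.
query_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
    llm=llm,
    verbose=True,  # prints the generated sub-questions as they run
)
response = query_engine.query("my question")
print(response)
```

With verbose=True the generated sub-questions are printed, which also helps diagnose cases like the empty response reported below.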

GOWTHAM-DORA commented 1 month ago

@logan-markewich, I used the gpt-4o model with AzureOpenAI and didn't get any errors, but the subquestion query engine failed to run the sub-questions and gave an empty response.