run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: JSONalyzeQueryEngine predict() missing 1 required positional argument: 'text' #13399

Closed kellyzhai closed 4 weeks ago

kellyzhai commented 4 months ago

Bug Description

Testing the code from this page https://docs.llamaindex.ai/en/stable/examples/query_engine/JSONalyze_query_engine/

```python
from llama_index.core.query_engine import JSONalyzeQueryEngine

json_stats_query_engine = JSONalyzeQueryEngine(
    list_of_dict=json_list,
    llm=llama_llm,
    verbose=True,
)
```
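For context, `list_of_dict` expects a list of flat dictionaries, which the engine loads into an in-memory SQLite table before running text-to-SQL over it (per the linked docs). A minimal sketch of such data, with placeholder values that are illustrative rather than taken from the original report:

```python
# Each dict becomes a row; the keys become columns of an in-memory SQLite table.
json_list = [
    {"name": "John Doe", "age": 25, "major": "Computer Science"},
    {"name": "Jane Smith", "age": 30, "major": "Mathematics"},
]
```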

After executing:

```python
display(
    Markdown("> Question: {}".format(questions[4])),
    Markdown("Answer: {}".format(json_stats_query_engine.query(questions[4]))),
)
```

the following traceback is raised:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[82], line 3
      1 display(
      2     Markdown("> Question: {}".format(questions[0])),
----> 3     Markdown("Answer: {}".format(json_stats_query_engine.query("What is the average age of the individuals in the dataset?"))),
      4 )

File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py:211, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    209 self.span_enter(id_=id_, bound_args=bound_args, instance=instance)
    210 try:
--> 211     result = func(*args, **kwargs)
    212 except BaseException as e:
    213     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/llama_index/core/base/base_query_engine.py:53, in BaseQueryEngine.query(self, str_or_query_bundle)
     51 if isinstance(str_or_query_bundle, str):
     52     str_or_query_bundle = QueryBundle(str_or_query_bundle)
---> 53 query_result = self._query(str_or_query_bundle)
     54 dispatch_event(QueryEndEvent())
     55 return query_result

File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/llama_index/core/query_engine/jsonalyze_query_engine.py:288, in JSONalyzeQueryEngine._query(self, query_bundle)
    285     print_text(f"Query: {query}\n", color="green")
    287 # Perform the analysis
--> 288 sql_query, table_schema, results = self._analyzer(
    289     self._list_of_dict,
    290     query_bundle,
    291     self._llm,
    292     table_name=self._table_name,
    293     prompt=self._jsonalyze_prompt,
    294     sql_parser=self._sql_parser,
    295 )
    296 if self._verbose:
    297     print_text(f"SQL Query: {sql_query}\n", color="blue")

File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/llama_index/core/query_engine/jsonalyze_query_engine.py:94, in default_jsonalyzer(list_of_dict, query_bundle, llm, table_name, prompt, sql_parser)
     92 prompt = prompt or DEFAULT_JSONALYZE_PROMPT
     93 # Get the SQL query with text-to-SQL prompt
---> 94 response_str = llm.predict(
     95     prompt=prompt,
     96     table_name=table_name,
     97     table_schema=table_schema,
     98     question=query,
     99 )
    101 sql_parser = sql_parser or DefaultSQLParser()
    103 sql_query = sql_parser.parse_response_to_sql(response_str, query_bundle)

File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    143     warned = True
    144     emit_warning()
--> 145 return wrapped(*args, **kwargs)

TypeError: predict() missing 1 required positional argument: 'text'
```

Version

%pip install llama-index

Steps to Reproduce

Just execute each step on the linked page.

Relevant Logs/Tracebacks

No response

dosubot[bot] commented 4 months ago

The error you're encountering, `predict() missing 1 required positional argument: 'text'`, indicates that the `predict` method is being called without the required `text` argument. This method expects a string containing the text for the model to process.

To fix this, make sure `predict` is called with all required arguments, including `text`, where `text` holds the fully formatted prompt with everything the model needs. For example:

```python
prompt_text = "Your formatted prompt here, including any necessary variables"
response_str = self._llm.predict(text=prompt_text)
```

This is a simplified example; the actual fix depends on the specifics of the `LLM` class and how its `predict` method is designed. Refer to the documentation or source code of the `LLM` class to see exactly how the prompt should be formatted and what must be included in the `text` argument.
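Judging from the `langchain_core/_api/deprecation.py` frame in the traceback, one plausible root cause (an inference from the traceback, not confirmed in this thread) is that `llama_llm` is a raw LangChain chat model: LangChain's deprecated `predict(text)` expects a single positional string, while `JSONalyzeQueryEngine` internally calls LlamaIndex's `LLM.predict(prompt, **prompt_args)`, as seen at line 94 of `jsonalyze_query_engine.py` above. If that is the case, wrapping the model in the `LangChainLLM` adapter from the `llama-index-llms-langchain` package should resolve the mismatch. A minimal sketch, assuming an OpenAI chat model via LangChain:

```python
# Hypothetical fix sketch: wrap a LangChain model in LlamaIndex's adapter.
# Requires: pip install llama-index-llms-langchain langchain-openai
from langchain_openai import ChatOpenAI  # assumption: the reporter's llama_llm is a LangChain model
from llama_index.llms.langchain import LangChainLLM
from llama_index.core.query_engine import JSONalyzeQueryEngine

# LangChain's deprecated predict(text) takes a single string, while
# JSONalyzeQueryEngine calls LlamaIndex's LLM.predict(prompt, **prompt_args).
# The LangChainLLM wrapper bridges the two interfaces.
llama_llm = LangChainLLM(llm=ChatOpenAI(model="gpt-3.5-turbo"))

json_stats_query_engine = JSONalyzeQueryEngine(
    list_of_dict=json_list,  # the reporter's data
    llm=llama_llm,
    verbose=True,
)
```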

#### Details

**Sources**

- [llama-index-core/llama_index/core/query_engine/jsonalyze_query_engine.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/query_engine/jsonalyze_query_engine.py)
- [llama-index-core/llama_index/core/llms/llm.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/llms/llm.py)
- [docs/docs/examples/query_engine/JSONalyze_query_engine.ipynb](https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/query_engine/JSONalyze_query_engine.ipynb)
- [llama-index-legacy/llama_index/legacy/llm_predictor/__init__.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llm_predictor/__init__.py)
- [llama-index-core/llama_index/core/service_context_elements/llm_predictor.py](https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/service_context_elements/llm_predictor.py)
- [llama-index-legacy/llama_index/legacy/llm_predictor/base.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llm_predictor/base.py)
- [llama-index-legacy/llama_index/legacy/llm_predictor/loading.py](https://github.com/run-llama/llama_index/blob/main/llama-index-legacy/llama_index/legacy/llm_predictor/loading.py)
**About Dosu**

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research.

Help me be more useful! Please leave a šŸ‘ if this is helpful and šŸ‘Ž if it is irrelevant.

To continue the conversation, mention @dosu.