run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License
34.76k stars 4.9k forks

AttributeError: 'LLMPredictor' object has no attribute 'get_text_from_nodes' #3353

Closed jma7889 closed 1 year ago

jma7889 commented 1 year ago

When using a custom service context, I get the following error with the latest llama-index 0.6.6 and langchain 0.0.168. If service_context is not used, it works.

from llama_index import GPTTreeIndex
...

default_prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
default_predictor = LLMPredictor()

service_context = ServiceContext.from_defaults(llm_predictor=default_prompt_helper, prompt_helper=default_prompt_helper)
index = GPTTreeIndex.from_documents(documents, service_context=service_context)

Error message:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[8], line 4
      1 from llama_index import GPTTreeIndex
      3 # Custom model config
----> 4 index = GPTTreeIndex.from_documents(documents, service_context=service_context)
      6 # default model config
      7 #index = GPTTreeIndex.from_documents(documents)
      8 
      9 # save index to file
     10 #index.storage_context.persist()

File ~/miniconda3/envs/rfp-annotation/lib/python3.10/site-packages/llama_index/indices/base.py:93, in BaseGPTIndex.from_documents(cls, documents, storage_context, service_context, **kwargs)
     89     docstore.set_document_hash(doc.get_doc_id(), doc.get_doc_hash())
     91 nodes = service_context.node_parser.get_nodes_from_documents(documents)
---> 93 return cls(
     94     nodes=nodes,
     95     storage_context=storage_context,
     96     service_context=service_context,
     97     **kwargs,
     98 )

File ~/miniconda3/envs/rfp-annotation/lib/python3.10/site-packages/llama_index/indices/tree/base.py:77, in GPTTreeIndex.__init__(self, nodes, index_struct, service_context, summary_template, insert_prompt, num_children, build_tree, use_async, **kwargs)
     75 self.build_tree = build_tree
     76 self._use_async = use_async
...
     90     )
     91     indices.append(i)
     92     cur_nodes_chunks.append(cur_nodes_chunk)

AttributeError: 'LLMPredictor' object has no attribute 'get_text_from_nodes'
Output is truncated.
logan-markewich commented 1 year ago

It looks like you set the llm_predictor to be the prompt helper (I think this is a mistake) 👀
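The failure mode is worth spelling out: a keyword argument of the wrong type is accepted silently at construction time and only blows up later, deep inside the index builder, when a predictor-only method is called. A minimal, library-free sketch of that pattern (the class and method names mirror llama-index for readability, but they are illustrative stand-ins, not the real implementation):

```python
# Library-free sketch of the reported failure mode: pass an object of the
# wrong type as llm_predictor and the error surfaces far from the call site.
# FakePromptHelper / FakeLLMPredictor are hypothetical stand-ins.

class FakePromptHelper:
    """Stands in for PromptHelper: has no predictor methods."""

class FakeLLMPredictor:
    """Stands in for LLMPredictor: has the method the index builder calls."""
    def get_text_from_nodes(self, nodes):
        return " ".join(nodes)

def build_tree(llm_predictor, nodes):
    # The builder assumes llm_predictor is a predictor; nothing checks this
    # at construction time, so a mix-up only fails here.
    return llm_predictor.get_text_from_nodes(nodes)

print(build_tree(FakeLLMPredictor(), ["a", "b"]))  # → a b

try:
    # The mix-up from the snippet above: prompt helper passed as predictor.
    build_tree(FakePromptHelper(), ["a", "b"])
except AttributeError as e:
    print(e)  # → 'FakePromptHelper' object has no attribute 'get_text_from_nodes'
```

(The traceback in the report names LLMPredictor rather than PromptHelper, which fits the follow-up below: the pasted snippet did not match the code that actually produced the error.)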

jma7889 commented 1 year ago

My example was wrong, so embarrassing. I think the real issue was that I needed to use ChatOpenAI instead of OpenAI from langchain for the LLMPredictor. The code for that part is changed to the following and it worked. Closing the ticket.

from llama_index import (
    LLMPredictor,
    ServiceContext,
    PromptHelper
)
from langchain.chat_models import ChatOpenAI

# define LLMs; the default is text-davinci-003
default_predictor = LLMPredictor()

# note: text-davinci-002 is a completion model, not a chat model, so it likely
# belongs with langchain's OpenAI wrapper rather than ChatOpenAI
davinci_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="text-davinci-002"))
gpt35_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-3.5-turbo"))
gpt4_32_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-4-32k"))
gpt4_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-4"))