run-llama / llama-hub

A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain
https://llamahub.ai/
MIT License

[Bug]: AttributeError: 'OpenAI' object has no attribute 'apredict' #778

Open rawwerks opened 7 months ago

rawwerks commented 7 months ago

Bug Description

I'm running https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/dense_x_retrieval/dense_x_retrieval.ipynb locally, following along with the notebook. As soon as I hit this code block:

from llama_index.llms import OpenAI
from llama_index.text_splitter import SentenceSplitter

dense_pack = DenseXRetrievalPack(
  documents, 
  proposition_llm=OpenAI(model="gpt-3.5-turbo", max_tokens=750),
  query_llm=OpenAI(model="gpt-3.5-turbo", max_tokens=256),
  text_splitter=SentenceSplitter(chunk_size=1024)
)
dense_query_engine = dense_pack.query_engine

I get this error:

AttributeError: 'OpenAI' object has no attribute 'apredict'

Version

https://pypi.org/project/llama-index/0.9.15.post2/
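It may be worth confirming which build actually gets imported at runtime, since the traceback below resolves to ~/Library/Python/3.12 and that environment could hold a different install than the one linked above. A quick check (assuming the installed package exposes __version__, which the 0.9.x line does):

import llama_index
print(llama_index.__version__)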

Steps to Reproduce

try to run https://github.com/run-llama/llama-hub/blob/main/llama_hub/llama_packs/dense_x_retrieval/dense_x_retrieval.ipynb

Relevant Logs/Tracebacks

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/Users/my/path/dense_x_retrieval.ipynb Cell 9 line 4
      1 from llama_index.llms import OpenAI
      2 from llama_index.text_splitter import SentenceSplitter
----> 4 dense_pack = DenseXRetrievalPack(
      5   documents, 
      6   proposition_llm=OpenAI(model="gpt-3.5-turbo", max_tokens=750),
      7   query_llm=OpenAI(model="gpt-3.5-turbo", max_tokens=256),
      8   text_splitter=SentenceSplitter(chunk_size=1024)
      9 )
     10 dense_query_engine = dense_pack.query_engine

File ~/Documents/GitHub/llama-index-library-raw/dense_pack/base.py:88, in DenseXRetrievalPack.__init__(self, documents, proposition_llm, query_llm, embed_model, text_splitter, similarity_top_k)
     85 embed_model = embed_model or OpenAIEmbedding(embed_batch_size=128)
     87 nodes = text_splitter.get_nodes_from_documents(documents)
---> 88 sub_nodes = self._gen_propositions(nodes)
     90 all_nodes = nodes + sub_nodes
     91 all_nodes_dict = {n.node_id: n for n in all_nodes}

File ~/Documents/GitHub/llama-index-library-raw/dense_pack/base.py:163, in DenseXRetrievalPack._gen_propositions(self, nodes)
    161 def _gen_propositions(self, nodes: List[TextNode]) -> List[TextNode]:
    162     """Get propositions."""
--> 163     sub_nodes = asyncio.run(
    164         run_jobs(
    165             [self._aget_proposition(node) for node in nodes],
    166             show_progress=True,
    167             workers=8,
    168         )
    169     )
    171     # Flatten list
    172     return [node for sub_node in sub_nodes for node in sub_node]

File ~/Library/Python/3.12/lib/python/site-packages/nest_asyncio.py:31, in _patch_asyncio.<locals>.run(main, debug)
     29 task = asyncio.ensure_future(main)
     30 try:
---> 31     return loop.run_until_complete(task)
     32 finally:
     33     if not task.done():

File ~/Library/Python/3.12/lib/python/site-packages/nest_asyncio.py:99, in _patch_loop.<locals>.run_until_complete(self, future)
     96 if not f.done():
     97     raise RuntimeError(
     98         'Event loop stopped before Future completed.')
---> 99 return f.result()

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/futures.py:203, in Future.result(self)
    201 self.__log_traceback = False
    202 if self._exception is not None:
--> 203     raise self._exception.with_traceback(self._exception_tb)
    204 return self._result

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py:304, in Task.__step_run_and_handle_result(***failed resolving arguments***)
    300 try:
    301     if exc is None:
    302         # We use the `send` method directly, because coroutines
    303         # don't have `__iter__` and `__next__` methods.
--> 304         result = coro.send(None)
    305     else:
    306         result = coro.throw(exc)

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/llama_index/async_utils.py:110, in run_jobs(jobs, show_progress, workers)
    106         return await job
    108 pool_jobs = [worker(job) for job in jobs]
--> 110 return await asyncio_mod.gather(*pool_jobs)

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tqdm/asyncio.py:79, in tqdm_asyncio.gather(cls, loop, timeout, total, *fs, **tqdm_kwargs)
     76     return i, await f
     78 ifs = [wrap_awaitable(i, f) for i, f in enumerate(fs)]
---> 79 res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
     80                                          total=total, **tqdm_kwargs)]
     81 return [i for _, i in sorted(res)]

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py:621, in as_completed.<locals>._wait_for_one()
    618 if f is None:
    619     # Dummy value from _on_timeout().
    620     raise exceptions.TimeoutError
--> 621 return f.result()

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/futures.py:203, in Future.result(self)
    201 self.__log_traceback = False
    202 if self._exception is not None:
--> 203     raise self._exception.with_traceback(self._exception_tb)
    204 return self._result

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/tasks.py:304, in Task.__step_run_and_handle_result(***failed resolving arguments***)
    300 try:
    301     if exc is None:
    302         # We use the `send` method directly, because coroutines
    303         # don't have `__iter__` and `__next__` methods.
--> 304         result = coro.send(None)
    305     else:
    306         result = coro.throw(exc)

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/tqdm/asyncio.py:76, in tqdm_asyncio.gather.<locals>.wrap_awaitable(i, f)
     75 async def wrap_awaitable(i, f):
---> 76     return i, await f

File /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/llama_index/async_utils.py:106, in run_jobs.<locals>.worker(job)
    104 async def worker(job: Coroutine) -> Any:
    105     async with semaphore:
--> 106         return await job

File ~/Documents/GitHub/llama-index-library-raw/dense_pack/base.py:119, in DenseXRetrievalPack._aget_proposition(self, node)
    117 async def _aget_proposition(self, node: TextNode) -> List[TextNode]:
    118     """Get proposition."""
--> 119     inital_output = await self._proposition_llm.apredict(
    120         PROPOSITIONS_PROMPT, node_text=node.text
    121     )
    122     outputs = inital_output.split("\n")
    124     all_propositions = []

AttributeError: 'OpenAI' object has no attribute 'apredict'

logan-markewich commented 7 months ago

@rawwerks I think you need to update your llama-index version
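For example, pip install -U llama-index (in whatever environment the notebook kernel actually uses). If I remember right, apredict() only landed on the LLM class itself partway through the 0.9.x line; on older installs it existed only on the predictor wrapper, which is exactly the AttributeError you're seeing.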

Alternatively, you can download and edit the llama-pack and wrap the LLM with the (deprecated) LLMPredictor class for that section of code
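Roughly like this (an untested sketch; it assumes the 0.9.x layout, where LLMPredictor is importable from the package root and exposes the apredict() signature the pack calls):

from llama_index import LLMPredictor
from llama_index.llms import OpenAI

# Wrap the raw LLM; LLMPredictor.apredict(prompt_template, **prompt_args)
# matches the call the pack makes in _aget_proposition.
proposition_llm = LLMPredictor(llm=OpenAI(model="gpt-3.5-turbo", max_tokens=750))

# In your local copy of the pack's base.py, assign this wrapper to
# self._proposition_llm so the existing line
#     await self._proposition_llm.apredict(PROPOSITIONS_PROMPT, node_text=node.text)
# keeps working even on an older llama-index.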

anoopshrma commented 6 months ago

Hey @rawwerks ,

Let me know if this fixed your issue; I'll close it then.

Thanks