run-llama / mixtral_ollama


Fails to run smoke_test #2

zjffdu opened this issue 6 months ago

zjffdu commented 6 months ago

Here's the error I get; what might be wrong?

/Users/jianfezhang/github/llamaindex-tutorial/venv/bin/python /Users/jianfezhang/github/llamaindex-tutorial/ollama_tutorial/1_smoketest.py 
Traceback (most recent call last):
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions
    yield
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 126, in read
    return self._sock.recv(max_bytes)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
TimeoutError: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
    yield
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 231, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
    raise exc
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
    response = connection.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
    return self._connection.handle_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 133, in handle_request
    raise exc
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 111, in handle_request
    ) = self._receive_response_headers(**kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 176, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 212, in _receive_event
    data = self._network_stream.read(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 124, in read
    with map_exceptions(exc_map):
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/jianfezhang/github/llamaindex-tutorial/ollama_tutorial/1_smoketest.py", line 6, in <module>
    response = llm.complete("Who is Laurie Voss?")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/llama_index/llms/base.py", line 223, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/llama_index/llms/ollama.py", line 178, in complete
    response = client.post(
               ^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_client.py", line 1146, in post
    return self.request(
           ^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_client.py", line 828, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_client.py", line 915, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_client.py", line 943, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_client.py", line 1016, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 230, in handle_request
    with map_httpcore_exceptions():
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/jianfezhang/github/llamaindex-tutorial/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out

Process finished with exit code 1
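
For context, 1_smoketest.py boils down to something like this (a minimal sketch reconstructed around the call at line 6 of the traceback; the model name is an assumption):

from llama_index.llms import Ollama

# model name assumed; use whatever model you've pulled with `ollama pull`
llm = Ollama(model="mixtral")

# this is the call that times out in the traceback above
response = llm.complete("Who is Laurie Voss?")
print(response)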
fjij commented 6 months ago

I'm getting the same issue.

fjij commented 6 months ago

Here's a workaround I came up with; it requires LangChain, though:

# LangChain's own Ollama client, wrapped so LlamaIndex can drive it
from langchain.llms import Ollama
from llama_index.llms import LangChainLLM

# LangChainLLM adapts any LangChain LLM to the LlamaIndex LLM interface
llm = LangChainLLM(llm=Ollama(model="llama2"))
response = llm.complete("What is the history of LEGO?")
print(response)
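
Alternatively, the ReadTimeout itself suggests the default HTTP request timeout is simply too short for a large model that is still loading into memory. If your llama_index version exposes a request_timeout parameter on its Ollama class (recent releases do; treat that as an assumption for older ones), raising it may be enough, without pulling in LangChain:

from llama_index.llms import Ollama

# request_timeout is in seconds; 120s leaves room for the model to load
# (parameter exists in recent llama_index releases; an assumption for older ones)
llm = Ollama(model="llama2", request_timeout=120.0)
response = llm.complete("What is the history of LEGO?")
print(response)

The LangChain wrapper above likely sidesteps the same limit because LangChain's Ollama client does not set a short read timeout on its requests.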