brumik / ollama-obsidian-indexer

TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' #6

Closed: replete closed this issue 3 months ago

replete commented 7 months ago

Followed the instructions, but this happens after initiating the Ollama Chat command:

[screenshot: 2024-02-21 23:58:24]

127.0.0.1 - - [21/Feb/2024 23:55:41] "POST / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 1478, in __call__
    return self.wsgi_app(environ, start_response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 1458, in wsgi_app
    response = self.handle_exception(e)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/server.py", line 45, in index
    response = query(user_query)
               ^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/llm.py", line 148, in query
    response = query_engine.query(query)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/core/base_query_engine.py", line 30, in query
    return self._query(str_or_query_bundle)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/query_engine/retriever_query_engine.py", line 170, in _query
    nodes = self.retrieve(query_bundle)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/query_engine/retriever_query_engine.py", line 126, in retrieve
    nodes = self._retriever.retrieve(query_bundle)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/core/base_retriever.py", line 54, in retrieve
    nodes = self._retrieve(query_bundle)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/indices/vector_store/retrievers/retriever.py", line 88, in _retrieve
    return self._get_nodes_with_embeddings(query_bundle)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/indices/vector_store/retrievers/retriever.py", line 164, in _get_nodes_with_embeddings
    query_result = self._vector_store.query(query, **self._kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/vector_stores/simple.py", line 274, in query
    top_similarities, top_ids = get_top_k_embeddings(
                                ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/indices/query/embedding_utils.py", line 31, in get_top_k_embeddings
    similarity = similarity_fn(query_embedding_np, emb)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/embeddings/base.py", line 48, in similarity
    product = np.dot(embedding1, embedding2)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
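
For what it's worth, the failing step reduces to a tiny sketch (values made up): if an embedding comes back as None, np.dot multiplies None by a float and raises exactly this error.

import numpy as np

# Minimal reproduction of the similarity failure above (hypothetical values):
# if the embedding model returns None for the query, or a stored node has no
# embedding, the dot product multiplies None by a float.
query_embedding = None            # what the retriever effectively received
node_embedding = [0.1, 0.2, 0.3]  # an illustrative stored embedding

np.dot(query_embedding, node_embedding)
# TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'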

.env

APP_DEVELOPMENT=0
APP_PORT=5000
LLM_MODEL="gemma:2b"
LLM_TEMPERATURE=0.1
LLL_PROMPT_TEMPLATE="
<s>[INST]
You are a helpful assistant, you will use the provided context to answer user questions.
Read the given context before answering questions and think step by step. If you can not answer a user question based on
the provided context, inform the user. Do not use any other information for answering user. Provide a detailed answer to the question.

Context: {context_str}
User: {query_str}
[/INST]
"
INDEXES_PERSIST_DIR="./storage"
NOTES_BASE_PATH="/Users/phil/x/x"

This is with the gemma:2b model.

Any ideas?

Environment:

replete commented 7 months ago

I installed mistral with Ollama, confirmed it was working in the CLI, reconfigured .env with that model, and this happened:

[screenshots: 2024-02-22 00:43:36, 2024-02-22 00:45:46]

127.0.0.1 - - [22/Feb/2024 00:40:05] "POST / HTTP/1.1" 500 -
Traceback (most recent call last):
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 101, in handle_request
    return self._connection.handle_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 143, in handle_request
    raise exc
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 113, in handle_request
    ) = self._receive_response_headers(**kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 186, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 224, in _receive_event
    data = self._network_stream.read(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 124, in read
    with map_exceptions(exc_map):
  File "/usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout: timed out

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 1478, in __call__
    return self.wsgi_app(environ, start_response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 1458, in wsgi_app
    response = self.handle_exception(e)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/server.py", line 45, in index
    response = query(user_query)
               ^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/llm.py", line 148, in query
    response = query_engine.query(query)
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/core/base_query_engine.py", line 30, in query
    return self._query(str_or_query_bundle)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/query_engine/retriever_query_engine.py", line 171, in _query
    response = self._response_synthesizer.synthesize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/response_synthesizers/base.py", line 146, in synthesize
    response_str = self.get_response(
                   ^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/response_synthesizers/compact_and_refine.py", line 38, in get_response
    return super().get_response(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/response_synthesizers/refine.py", line 146, in get_response
    response = self._give_response_single(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/response_synthesizers/refine.py", line 202, in _give_response_single
    program(
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/response_synthesizers/refine.py", line 64, in __call__
    answer = self._llm.predict(
             ^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/llms/llm.py", line 220, in predict
    chat_response = self.chat(messages)
                    ^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/llms/base.py", line 97, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/llama_index/llms/ollama.py", line 102, in chat
    response = client.post(
               ^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_client.py", line 1145, in post
    return self.request(
           ^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_client.py", line 827, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/local/Cellar/python@3.11/3.11.7_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/phil/Documents/Vault/.ollama-indexer/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout: timed out

This seemed more promising, but my inference speed with mistral-7b is much slower than with gemma:2b, and these errors suggest a simple timeout issue.
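
If it is a timeout, one possible workaround (a sketch, assuming the indexer builds its LLM through llama_index's Ollama wrapper, as the traceback suggests) would be raising the request timeout:

from llama_index.llms import Ollama

# Sketch only: give slow models like mistral-7b more time to respond.
# The model name and timeout value are illustrative; request_timeout is
# passed through to the underlying httpx client.
llm = Ollama(
    model="mistral",
    temperature=0.1,
    request_timeout=300.0,  # seconds
)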

I hope this report helps you with further development.

brumik commented 7 months ago

Hello, thank you for reporting this issue; however, I could not reproduce it. It looks like this is a problem in one of the myriad libraries that work under the hood with AI, not a problem in my script. I'll keep an eye out for it, but I cannot reproduce it for now.

replete commented 7 months ago

No problem, thanks for looking into this. This is the main problem I experience with Python libraries: it's not uncommon to have trouble running applications in different environments.

When I get a moment, I will attempt to get this running on an Ubuntu VM, as I suspect it's macOS environment related, which in my experience is sometimes problematic where Python is concerned. In such a case, perhaps a Docker image would help.

Are there any videos of this plugin in action, by the way?

brumik commented 4 months ago

@replete Hello, I updated the lib to Poetry and removed a bunch of libraries that could be problematic (tensor and GPU stuff). Now the embeddings should come from Ollama (you can choose your own model, but you need to install it in Ollama). Could you please give it a go again? https://ollama.com/blog/embedding-models
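
Roughly, the new setup looks like this (a sketch; the model name is just an example from that blog post, and you need to pull it first with "ollama pull nomic-embed-text"):

from llama_index.embeddings.ollama import OllamaEmbedding

# Sketch: embeddings served by Ollama instead of a local tensor/GPU stack.
# The model name is an example from the linked blog post.
embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
)
vector = embed_model.get_text_embedding("test note contents")
print(len(vector))  # dimensionality, e.g. 768 for nomic-embed-text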

brumik commented 3 months ago

I expect that it went well, so I'm closing this issue. If there are any problems, feel free to reopen.

replete commented 3 months ago

@brumik Apologies for not responding; I gave up entirely on Obsidian RAG after trying every solution.

replete commented 3 months ago

Might try again now that you've added Docker, thanks for that.