AndrewVeee / nucleo-ai

An AI assistant beyond the chat box.
Apache License 2.0

AttributeError: 'NoneType' object has no attribute 'split' #17

PieBru opened this issue 7 months ago

PieBru commented 7 months ago

Hi, nice tool! The installation went flawlessly. I then tried the "Research" stream, and it returned the error AttributeError: 'NoneType' object has no attribute 'split'.

Here is the bash console log, up to just before my query:

(base) 18:20:26 piero@gl753vd  nucleo-ai $ ./start.sh 
Fri Mar  8 18:20:35 2024 * Loading modules
[Fri Mar  8 18:20:38 2024] * Loading config
[Fri Mar  8 18:20:38 2024] * Initializing database
[Fri Mar  8 18:20:38 2024] * Initializing RAG
[Fri Mar  8 18:20:40 2024] * Initializing webserver
[Fri Mar  8 18:20:40 2024] * Loading LLM
[Fri Mar  8 18:20:40 2024]   http://192.168.111.181:14434/v1 [key len=4] [model=mistral]
[Fri Mar  8 18:20:40 2024] * Starting job queue
[Fri Mar  8 18:20:40 2024] * Loading functions
[Fri Mar  8 18:20:40 2024]   Secretary: to do lists - ['add_todo']
[Fri Mar  8 18:20:40 2024]   Writer: responses, write docs - ['respond_user', 'create_doc']
[Fri Mar  8 18:20:40 2024]   Artist: take photos, draw, edit media - []
[Fri Mar  8 18:20:40 2024]   Researcher: find, research, look ups, web searches - ['web_search']
[Fri Mar  8 18:20:40 2024]   General: tasks not handled - []
[Fri Mar  8 18:20:40 2024] * Loading Models
[Fri Mar  8 18:20:40 2024]   Models: ['chat', 'assistant-orig', 'assistant', 'message', 'researcher', 'hi']
[Fri Mar  8 18:20:40 2024] * Starting!
 * Serving Flask app 'app.web_server'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://127.0.0.1:4742
Press CTRL+C to quit
127.0.0.1 - - [08/Mar/2024 18:20:57] "GET / HTTP/1.1" 302 -
127.0.0.1 - - [08/Mar/2024 18:20:57] "GET /index.html HTTP/1.1" 304 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api//store/list HTTP/1.1" 308 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api//store/list HTTP/1.1" 308 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api//store/list HTTP/1.1" 308 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api//store/list HTTP/1.1" 308 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api//state/check_auth HTTP/1.1" 308 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "GET /icons/help.svg HTTP/1.1" 304 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "GET /icons/moon.svg HTTP/1.1" 304 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "GET /icons/settings.svg HTTP/1.1" 304 -
[Fri Mar  8 18:20:58 2024] SELECT: select * from user_store where data_type=? order by created_at desc [['todo']]
[Fri Mar  8 18:20:58 2024] SELECT: select id, user_id, data_type, subtype, root_id, parent_type, parent_id, name, metadata, pinned, source, embed_model, ai_created, created_at, updated_at from user_store where data_type=? order by created_at desc [['doc']]
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api/store/list HTTP/1.1" 200 -
[Fri Mar  8 18:20:58 2024] SELECT: select * from user_store where data_type=? order by created_at desc [['stream']]
[Fri Mar  8 18:20:58 2024] SELECT: select * from user_store where data_type=? order by created_at desc [['chat']]
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api/store/list HTTP/1.1" 200 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api/store/list HTTP/1.1" 200 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api/state/check_auth HTTP/1.1" 200 -
127.0.0.1 - - [08/Mar/2024 18:20:58] "POST /api/store/list HTTP/1.1" 200 -

This is the log after my query "List 5 supplements to improve Insulin Resistance.":

127.0.0.1 - - [08/Mar/2024 18:23:44] "POST /api//v1/chat/completions HTTP/1.1" 308 -
127.0.0.1 - - [08/Mar/2024 18:23:44] "GET /icons/loader.svg HTTP/1.1" 304 -
[Fri Mar  8 18:23:44 2024] Using model: researcher ({'topic': 'List 5 supplements to improve Insulin Resistance.'})
[Fri Mar  8 18:23:44 2024] [Researcher:run_llm] [{'role': 'system', 'content': 'You are an advanced topic researcher. Given this topic:\n```\nList 5 supplements to improve Insulin Resistance.\n```\n\nList the 3-7 main research topics for the text. Keep them short.'}]
Job Exception: Connection error.
Exception in thread Thread-23 (handler):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/mnt/00aadc36-3e91-4512-b272-3e84356ac527/Piero/AI_Lab/Github/nucleo-ai/backend/app/ai_models/researcher_model.py", line 305, in handler
    tq.queue.append(self.generate_block("\n".join(chain.parse_list(job.result, None))))
                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/00aadc36-3e91-4512-b272-3e84356ac527/Piero/AI_Lab/Github/nucleo-ai/backend/ai_tools/prompt_chain/chain.py", line 12, in parse_list
    for line in output.split("\n"):
                ^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'

EDIT: I forgot to include my config:

# If you make the app externally accessible, you should set a good
# secret key.
#auth_key = 'my s3cret k33y'
server_host = "127.0.0.1"
server_port = 4742
log_level = 3

[llm]
name = 'mistral'
default = true
type = 'openai'

# Set this to the port of your local instance or update to your API service and key.
openai_base_url = 'http://192.168.111.181:14434/v1'
openai_api_key = 'none'
openai_model = 'mistral'

# NOTE: Since a proper tokenizer isn't used, you should set this to about 2/3 of your
# actual max context size.
context_size = 2048

# Maximum number of completions at a time.
# For local servers (llama.cpp, oobabooga, etc), this should be set to 1, otherwise
# it might cut off a response to start a new one.
# If you're using an API/serving infrastructure, you can set this higher.
max_concurrent = 1

[embed]
# If you change the embedding model, change this name so Chroma will keep working.
db_name = "bge-small-en-v1.5"
#embed_model = "BAAI/bge-small-en-v1.5"
#rank_model = "cross-encoder/ms-marco-MiniLM-L-6-v2"

Thank you, Piero

PieBru commented 7 months ago

Got it, there was a typo in the URL of my Ollama server: the port is 11434, not 14434. The console error didn't help me, so I would recommend giving meaningful errors for the most common problems, starting with network/server issues. I'll leave it to you to close this issue.
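
For anyone who hits the same error: the fix was just correcting the port in the [llm] section of the config posted above (Ollama listens on 11434 by default):

openai_base_url = 'http://192.168.111.181:11434/v1'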

BTW, your app gave the correct answers. Thank you very much, this tool is impressive! Piero

AndrewVeee commented 7 months ago

Thanks so much!

I'm going to keep this open for a while - I really need to build a part of the UI for testing the connections and giving tips about where the problem is.
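
As a rough idea of what that check could do (just a sketch - the helper name and the standard-library probe are placeholders, not nucleo-ai code, and it assumes the server exposes the OpenAI-compatible /v1/models endpoint):

import urllib.error
import urllib.request

def check_llm_endpoint(base_url, timeout=5.0):
    # Hypothetical startup probe: try to reach the configured
    # OpenAI-compatible server and print a friendly hint instead of
    # surfacing a bare connection error later.
    try:
        # base_url already ends in /v1 in the config above, so /models
        # targets the standard OpenAI-compatible model listing.
        urllib.request.urlopen(f"{base_url}/models", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # An HTTP error (e.g. 401) still means the server is reachable.
        return True
    except (urllib.error.URLError, OSError) as err:
        print(f"Cannot reach the LLM server at {base_url}: {err}")
        print("Check openai_base_url in your config "
              "(e.g. Ollama listens on port 11434 by default).")
        return False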

And of course this is an obvious bug that should be detected and handled. The same failure can occur in any of the multiple LLM requests, so it should be easy to handle at each step.
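
For context, the crash in the traceback above happens because the connection error leaves job.result as None, and parse_list in ai_tools/prompt_chain/chain.py then calls .split() on it. Something along these lines would turn the crash into an actionable message (just a sketch, not the final fix; the second argument's behavior is left out here):

def parse_list(output, default=None):
    # Defensive version of chain.parse_list: when the upstream LLM
    # request fails (e.g. connection error), job.result is None and
    # output.split("\n") raises AttributeError. Fail loudly instead.
    if output is None:
        raise RuntimeError(
            "No LLM output to parse - the completion request likely "
            "failed. Check openai_base_url and that the server is up."
        )
    # Keep the non-empty lines, as the original parsing loop does.
    return [line.strip() for line in output.split("\n") if line.strip()]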

I'm also really glad the researcher gave you a good answer - I'm so happy with the feedback so far!