Closed Stamenov closed 1 day ago
Here is my request payload:
* uri: neo4j+s://123.databases.neo4j.io:7687
* database: neo4j
* userName: neo4j
* password: pass
* question: this is a question
* session_id: 123
* model: openai-gpt-4o
* mode: graph+vector+fulltext
* document_names: (sent empty)
Here are my logs:
2024-10-21 15:42:35,102 - QA_RAG called at 2024-10-21 15:42:35.102360
2024-10-21 15:42:36,090 - Chat Mode : graph+vector+fulltext
2024-10-21 15:42:36,720 - Model: LLM_MODEL_CONFIG_openai-gpt-4o
2024-10-21 15:42:36,818 - Model created - Model Version: openai-gpt-4o
2024-10-21 15:42:36,818 - Model called in chat openai-gpt-4o and model version is gpt-4o-2024-08-06
2024-10-21 15:42:37,024 - Successfully retrieved Neo4jVector index 'vector'
2024-10-21 15:42:37,024 - Error retrieving Neo4jVector index 'vector' or creating retriever: the JSON object must be str, bytes or bytearray, not NoneType
2024-10-21 15:42:37,024 - Exception in QA component at 2024-10-21 15:42:37.024434: An error occurred while retrieving the Neo4jVector index '{index_name}' or creating the retriever. Please drop and create a new vector index: {e}
Traceback (most recent call last):
File "/home/ubuntu/llm-graph-builder/backend/src/QA_integration_new.py", line 73, in get_neo4j_retriever
document_names= list(map(str.strip, json.loads(document_names)))
File "/usr/lib/python3.10/json/__init__.py", line 339, in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
TypeError: the JSON object must be str, bytes or bytearray, not NoneType
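For reference, the traceback shows `json.loads` being handed `None` when `document_names` is omitted from the payload. A defensive version of the offending line (a hypothetical sketch, not the project's actual fix) would tolerate a missing or empty value:

```python
import json

def parse_document_names(document_names):
    """Parse document_names from the request payload.

    Accepts a JSON-encoded list (e.g. '["doc1.txt", "doc2.txt"]'),
    an already-parsed list, or None/empty input, and always returns
    a list of stripped file names.
    """
    if not document_names:  # None or "" -> no document filter
        return []
    if isinstance(document_names, str):
        document_names = json.loads(document_names)
    return [str(name).strip() for name in document_names]
```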
Hi @Stamenov, can you try dropping all vector indexes present in your DB and then trying again?
@Stamenov Try not to send an empty `document_names` when "vector" is part of your mode, for example "graph+vector" or "graph+vector+fulltext" as in your payload.
If you do not want to specify any documents, send `[]`.
The expected format for `document_names` is `["document1.txt", "document2.txt", ...]`.
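As a concrete illustration (field names are taken from the payload quoted at the top of this thread; the values are placeholders), a well-formed request body could look like:

```python
import json

# Hypothetical chat request body for the local backend. Note that
# document_names is a JSON-encoded list -- "[]" when no filter is
# wanted -- rather than an empty string or an omitted field.
payload = {
    "uri": "neo4j+s://123.databases.neo4j.io:7687",
    "database": "neo4j",
    "userName": "neo4j",
    "password": "pass",
    "question": "this is a question",
    "session_id": "123",
    "model": "openai-gpt-4o",
    "mode": "graph+vector+fulltext",
    "document_names": json.dumps([]),  # or json.dumps(["doc1.txt"])
}
```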
Hi @Stamenov are you able to solve this issue?
I was able to solve the issue. Basically, the code is not robust to user input at all: small variations, such as a space around the "+", are not sanitized in any way. See, e.g., backend/src/QA_integration_new.py: `if mode == "fulltext" or mode == "graph + vector + fulltext":`
Not to mention undefined variables later on, such as `document_names`, and other small issues like that.
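To illustrate the sanitization point: instead of string-matching mode values that happen to contain spaces, the backend could normalize the mode before comparing. A minimal sketch (the function name is hypothetical, not from the repo):

```python
def normalize_mode(mode: str) -> str:
    """Collapse all whitespace so user input like
    'graph + vector + fulltext' compares equal to
    'graph+vector+fulltext'."""
    return "".join(mode.split())
```

With this, `normalize_mode("graph + vector + fulltext")` and `normalize_mode("graph+vector+fulltext")` both yield the same canonical string, so a single comparison covers both spellings.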
Hi,
I have been using https://llm-graph-builder.neo4jlabs.com/ to build a KG by connecting to a free Aura instance. I have a bunch of nodes and relationships, as can be seen in the Inspect Generated Graph view on the llm-graph-builder web app.
Now I would like to use the local backend setup from this repo, connecting to the Aura instance and sending queries to the backend, which communicates with the remote Aura instance (the connection has been checked and works).
Using the /chat_bot_chat_bot_post operation in the FastAPI /docs endpoint is causing me problems locally, while it works online:
One parameter that was challenging was the session_id, which I basically copied by inspecting the request payload on the web app. As mentioned, the DB credentials and connection work there.
Thanks in advance.