severian42 / GraphRAG-Local-UI

GraphRAG using Local LLMs - Features robust API and multiple apps for Indexing/Prompt Tuning/Query/Chat/Visualizing/Etc. This is meant to be the ultimate GraphRAG/KG local LLM app.
MIT License

Local search has the error: ValueError: Query vector size 384 does not match index column size 768 #9

Closed. xuezhizeng closed this issue 3 months ago.

xuezhizeng commented 3 months ago

When choosing "global", search works well. However, choosing "local" for any query fails with the error below:

Error: Traceback (most recent call last):
  File "/usr/local/conda3/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/conda3/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/query/__main__.py", line 76, in <module>
    run_local_search(
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/query/cli.py", line 154, in run_local_search
    result = search_engine.search(query=query)
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/query/structured_search/local_search/search.py", line 119, in search
    context_text, context_records = self.context_builder.build_context(
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/query/structured_search/local_search/mixed_context.py", line 141, in build_context
    selected_entities = map_query_to_entities(
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/query/context_builder/entity_extraction.py", line 55, in map_query_to_entities
    search_results = text_embedding_vectorstore.similarity_search_by_text(
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/vector_stores/lancedb.py", line 120, in similarity_search_by_text
    return self.similarity_search_by_vector(query_embedding, k)
  File "/root/msgraphrag_env/GraphRAG-Ollama-UI-main/graphrag/vector_stores/lancedb.py", line 99, in similarity_search_by_vector
    .to_list()
  File "/usr/local/conda3/lib/python3.10/site-packages/lancedb/query.py", line 303, in to_list
    return self.to_arrow().to_pylist()
  File "/usr/local/conda3/lib/python3.10/site-packages/lancedb/query.py", line 528, in to_arrow
    return self.to_batches().read_all()
  File "/usr/local/conda3/lib/python3.10/site-packages/lancedb/query.py", line 558, in to_batches
    result_set = self._table._execute_query(query, batch_size)
  File "/usr/local/conda3/lib/python3.10/site-packages/lancedb/table.py", line 1623, in _execute_query
    return ds.scanner(
  File "/usr/local/conda3/lib/python3.10/site-packages/lance/dataset.py", line 336, in scanner
    builder = builder.nearest(**nearest)
  File "/usr/local/conda3/lib/python3.10/site-packages/lance/dataset.py", line 2150, in nearest
    raise ValueError(
ValueError: Query vector size 384 does not match index column size 768
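The last line points at two different embedding models being in play: 384 dimensions is typical of sentence-transformers models such as all-MiniLM-L6-v2, while 768 is typical of models such as nomic-embed-text, so the query path appears to embed with a different model than the one used to build the index. Below is a minimal diagnostic sketch, assuming the default LanceDB layout GraphRAG writes; the `./lancedb` path and the `entity_description_embeddings` table name are assumptions, so adjust them to your output folder and to what `db.table_names()` reports.

```python
# Minimal diagnostic sketch (assumptions: LanceDB files live under ./lancedb in the
# index output, and the local-search collection is named "entity_description_embeddings";
# check db.table_names() and your settings if these differ).
import lancedb

db = lancedb.connect("./lancedb")
print("tables:", db.table_names())

tbl = db.open_table("entity_description_embeddings")
vector_field = tbl.schema.field("vector")                  # embeddings are stored in a "vector" column
print("index column size:", vector_field.type.list_size)   # e.g. 768 for this index

# The embedding model used at query time must produce vectors of the same size.
# Check the embeddings model configured for querying (e.g. nomic-embed-text -> 768 dims,
# all-MiniLM-L6-v2 -> 384 dims) and make it match the model used during indexing.
```

In short, the fix is usually to point the query configuration at the same embedding model that built the index, or to re-index with the model you intend to query with.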

severian42 commented 3 months ago

The most recent update should resolve the issues with indexing and creating the needed output files. Give it a try and report back if you still encounter any errors during indexing or querying.

How to query the generated graph:

- If the indexing run completes with no errors, you will end up with the full set of output files.
- Once you have those, initialize the folder within the Index Management tab.
- Once it is initialized (there should be 20 items in total), the graph becomes available to query with the LLM. See the sketch below for a quick way to verify the output files.
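If you want to double-check the artifacts outside the UI, here is a quick sketch that lists the parquet files an indexing run produces; the `./output` path is an assumption, so point it at the storage directory from your settings:

```python
# Quick sanity check: list the parquet artifacts from an indexing run.
# The "./output" path is an assumption; use the storage base_dir configured in your settings.
from pathlib import Path

artifacts = sorted(Path("./output").rglob("*.parquet"))
for p in artifacts:
    print(p.name)
print(f"{len(artifacts)} parquet files found")  # a complete run should yield roughly 20 items
```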