bhancockio / langchain-crash-course


Problems with vectorstores: "RuntimeError: Cannot open header file" #3

Open kauttoj opened 1 month ago

kauttoj commented 1 month ago

Thanks for this great course! I'm encountering a weird issue when creating the RAG vectorstores. Sometimes an "index_metadata.pickle" file appears in a subfolder of a persistent vectorstore. If this file is present, the vectorstore cannot be loaded and I get a "RuntimeError: Cannot open header file" error. If I manually delete that pickle file, the issue goes away.

After lots of testing with LangChain and Chroma, this seems to occur only when the vectorstore becomes large enough; toy examples have no issues. I get this error with your RAG example "2a_rag_basics_metadata.py", which has over 13k chunks. I also get the error for the "custom" splitter type in your "3_rag_text_splitting_deep_dive.py" example, with over 1k items, while the other 4 types work fine (no index file created).

Is there any workaround or a reasonable cause for this issue? It appears to be some sort of bug in LangChain and/or Chroma...

EDIT: This appears to be a known issue with Chroma, also discussed here: https://github.com/chroma-core/chroma/issues/872. The workaround is to increase the HNSW sync threshold from its default of 1000 so that the index file is never written, e.g., by adding `collection_metadata={"hnsw:sync_threshold": 20000}` when creating the vectorstore. Hopefully this helps others running those RAG example codes.
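For reference, here is a minimal sketch of where that workaround goes in the LangChain Chroma wrapper. The helper name `build_vectorstore` is hypothetical, and the sketch assumes `langchain-community` and an embeddings provider are installed when it is actually run:

```python
def build_vectorstore(docs, embeddings, persist_directory, sync_threshold=20000):
    """Create a persistent Chroma store with a raised HNSW sync threshold.

    Raising hnsw:sync_threshold above the collection size (default: 1000)
    should prevent Chroma from writing the index_metadata.pickle file that
    triggers "RuntimeError: Cannot open header file" on reload.
    """
    # Import deferred so the helper can be defined without Chroma installed.
    from langchain_community.vectorstores import Chroma

    return Chroma.from_documents(
        docs,
        embeddings,
        persist_directory=persist_directory,
        collection_metadata={"hnsw:sync_threshold": sync_threshold},
    )
```

Pick a `sync_threshold` comfortably larger than your total chunk count (e.g., 20000 for the ~13k-chunk example above).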

ELBEQQAL94 commented 1 month ago

I have the same issue. This is the solution that works for me (batching the inserts instead of adding all documents at once; `documents` and `persistent_directory` are assumed to be defined earlier, as in the course examples):

```python
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Split the document into chunks with a maximum size of 100, with overlap for better context
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=20)
docs = text_splitter.split_documents(documents)

# Display information about the split documents
print("\n--- Document Chunks Information ---")
print(f"Number of document chunks: {len(docs)}")
print(f"Sample chunk:\n{docs[0].page_content}\n")

# Create embeddings
print("\n--- Creating embeddings ---")
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# Initialize vector store and process documents in batches
print("\n--- Creating vector store ---")
batch_size = 166  # Maximum batch size allowed

# Initialize an empty vector store
db = Chroma(embedding_function=embeddings, persist_directory=persistent_directory)

# Process documents in batches
for i in range(0, len(docs), batch_size):
    batch_docs = docs[i:i + batch_size]
    db.add_documents(batch_docs)

print("\n--- Finished creating vector store ---")
```
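The batch loop in that snippet is plain slice-based chunking. Factored out as a standalone helper (pure Python, independent of LangChain; the name `batched` is my own), it looks like this:

```python
def batched(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Example: 13 items in batches of 5 gives slices of sizes 5, 5, 3.
sizes = [len(batch) for batch in batched(list(range(13)), 5)]
```

Each yielded batch can then be passed to `db.add_documents(batch)` exactly as in the loop above.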