Closed applepieiris closed 5 months ago
If you are using llama_index==0.9.32, the code should look like this (note that the client passed to `ElasticsearchStore` must be an `AsyncElasticsearch` instance):

```python
from elasticsearch import AsyncElasticsearch
from llama_index import GPTVectorStoreIndex, StorageContext
from llama_index.vector_stores import ElasticsearchStore

# elasticsearch-py 8.x requires the scheme in the host URL
client = AsyncElasticsearch(
    "http://localhost:9200",
    api_key="your key",
    verify_certs=False,
)
vector_store = ElasticsearchStore(
    index_name="test_more",
    es_client=client,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# `nodes` and `service_context` are assumed to be defined earlier
vector_index = GPTVectorStoreIndex(
    nodes,
    service_context=service_context,
    storage_context=storage_context,
    show_progress=True,
)
```
Thank you for the addition; I will update the README to remind other users to pay attention to this.
When using the bm25.py file, the code raises the following error:

```
File "/opt/conda/envs/RAG/lib/python3.10/site-packages/llama_index/vector_stores/elasticsearch.py", line 246, in _create_index_if_not_exists
    if await self.client.indices.exists(index=index_name):
TypeError: object HeadApiResponse can't be used in 'await' expression
```

I think the problem is caused by the llama_index version; requirements.txt pins llama_index 0.9.32, which does have this problem.
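For context, the traceback happens whenever a synchronous Elasticsearch client is handed to code that `await`s its calls: a sync client returns the response object directly instead of a coroutine, and awaiting a plain object raises exactly this `TypeError`. Here is a minimal, self-contained sketch of the failure mode (the `HeadApiResponse` class and `sync_indices_exists` function below are stand-ins, not the real elasticsearch-py API):

```python
import asyncio

class HeadApiResponse:
    """Stand-in for the response object a synchronous client returns (hypothetical)."""

def sync_indices_exists(index):
    # A synchronous client returns the response directly, not an awaitable
    return HeadApiResponse()

async def create_index_if_not_exists(index_name):
    # llama_index awaits this call, which only works with an async client;
    # awaiting a plain response object raises the TypeError from the traceback
    return await sync_indices_exists(index_name)

def reproduce():
    try:
        asyncio.run(create_index_if_not_exists("test_more"))
    except TypeError as e:
        return str(e)
    return None

print(reproduce())  # → object HeadApiResponse can't be used in 'await' expression
```

This is why passing an `AsyncElasticsearch` client, as shown earlier in the thread, makes the error go away: its `indices.exists()` returns a coroutine that can legitimately be awaited.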