zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://docs.privategpt.dev

Elasticsearch - There is no current event loop in thread 'AnyIO worker thread' #1984

Open · mschwartzRDR opened this issue 3 weeks ago

mschwartzRDR commented 3 weeks ago

Hi, I want to add Elasticsearch as a vector DB. For this, I changed and created different settings to make it work. But when I upload a file, I get this error:

$ PGPT_PROFILES=ollama make run
poetry run python -m private_gpt
15:18:52.347 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'ollama']
15:18:58.296 [DEBUG ] private_gpt.server.utils.auth - Defining a dummy authentication mechanism for fastapi, always authenticating requests
15:18:58.359 [DEBUG ] private_gpt.launcher - Setting up CORS middleware
15:18:58.359 [DEBUG ] private_gpt.launcher - Importing the UI module
15:18:58.682 [DEBUG ] httpx - load_ssl_context verify=True cert=None trust_env=True http2=False
15:18:58.683 [DEBUG ] httpx - load_verify_locations cafile='/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/certifi/cacert.pem'
15:18:58.739 [DEBUG ] httpx - load_ssl_context verify=True cert=None trust_env=True http2=False
15:18:58.740 [DEBUG ] httpx - load_verify_locations cafile='/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/certifi/cacert.pem'
15:18:58.872 [DEBUG ] httpx - load_ssl_context verify=True cert=None trust_env=True http2=False
15:18:58.873 [DEBUG ] httpx - load_verify_locations cafile='/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/certifi/cacert.pem'
15:18:59.051 [DEBUG ] PIL.Image - Importing BlpImagePlugin
15:18:59.052 [DEBUG ] PIL.Image - Importing BmpImagePlugin
15:18:59.053 [DEBUG ] PIL.Image - Importing BufrStubImagePlugin
15:18:59.053 [DEBUG ] PIL.Image - Importing CurImagePlugin
15:18:59.054 [DEBUG ] PIL.Image - Importing DcxImagePlugin
15:18:59.055 [DEBUG ] PIL.Image - Importing DdsImagePlugin
15:18:59.059 [DEBUG ] PIL.Image - Importing EpsImagePlugin
15:18:59.060 [DEBUG ] PIL.Image - Importing FitsImagePlugin
15:18:59.060 [DEBUG ] PIL.Image - Importing FliImagePlugin
15:18:59.061 [DEBUG ] PIL.Image - Importing FpxImagePlugin
15:18:59.061 [DEBUG ] PIL.Image - Image: failed to import FpxImagePlugin: No module named 'olefile'
15:18:59.061 [DEBUG ] PIL.Image - Importing FtexImagePlugin
15:18:59.062 [DEBUG ] PIL.Image - Importing GbrImagePlugin
15:18:59.062 [DEBUG ] PIL.Image - Importing GifImagePlugin
15:18:59.066 [DEBUG ] PIL.Image - Importing GribStubImagePlugin
15:18:59.066 [DEBUG ] PIL.Image - Importing Hdf5StubImagePlugin
15:18:59.067 [DEBUG ] PIL.Image - Importing IcnsImagePlugin
15:18:59.069 [DEBUG ] PIL.Image - Importing IcoImagePlugin
15:18:59.069 [DEBUG ] PIL.Image - Importing ImImagePlugin
15:18:59.071 [DEBUG ] PIL.Image - Importing ImtImagePlugin
15:18:59.072 [DEBUG ] PIL.Image - Importing IptcImagePlugin
15:18:59.072 [DEBUG ] PIL.Image - Importing JpegImagePlugin
15:18:59.073 [DEBUG ] PIL.Image - Importing Jpeg2KImagePlugin
15:18:59.073 [DEBUG ] PIL.Image - Importing McIdasImagePlugin
15:18:59.073 [DEBUG ] PIL.Image - Importing MicImagePlugin
15:18:59.074 [DEBUG ] PIL.Image - Image: failed to import MicImagePlugin: No module named 'olefile'
15:18:59.074 [DEBUG ] PIL.Image - Importing MpegImagePlugin
15:18:59.074 [DEBUG ] PIL.Image - Importing MpoImagePlugin
15:18:59.076 [DEBUG ] PIL.Image - Importing MspImagePlugin
15:18:59.077 [DEBUG ] PIL.Image - Importing PalmImagePlugin
15:18:59.077 [DEBUG ] PIL.Image - Importing PcdImagePlugin
15:18:59.077 [DEBUG ] PIL.Image - Importing PcxImagePlugin
15:18:59.077 [DEBUG ] PIL.Image - Importing PdfImagePlugin
15:18:59.085 [DEBUG ] PIL.Image - Importing PixarImagePlugin
15:18:59.085 [DEBUG ] PIL.Image - Importing PngImagePlugin
15:18:59.086 [DEBUG ] PIL.Image - Importing PpmImagePlugin
15:18:59.087 [DEBUG ] PIL.Image - Importing PsdImagePlugin
15:18:59.088 [DEBUG ] PIL.Image - Importing QoiImagePlugin
15:18:59.089 [DEBUG ] PIL.Image - Importing SgiImagePlugin
15:18:59.090 [DEBUG ] PIL.Image - Importing SpiderImagePlugin
15:18:59.090 [DEBUG ] PIL.Image - Importing SunImagePlugin
15:18:59.091 [DEBUG ] PIL.Image - Importing TgaImagePlugin
15:18:59.091 [DEBUG ] PIL.Image - Importing TiffImagePlugin
15:18:59.091 [DEBUG ] PIL.Image - Importing WebPImagePlugin
15:18:59.092 [DEBUG ] PIL.Image - Importing WmfImagePlugin
15:18:59.093 [DEBUG ] PIL.Image - Importing XbmImagePlugin
15:18:59.094 [DEBUG ] PIL.Image - Importing XpmImagePlugin
15:18:59.094 [DEBUG ] PIL.Image - Importing XVThumbImagePlugin
15:19:00.637 [DEBUG ] urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443
15:19:00.833 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /mistralai/Mistral-7B-Instruct-v0.2/resolve/main/tokenizer_config.json HTTP/1.1" 200 0
15:19:00.923 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
15:19:01.053 [DEBUG ] asyncio - Using selector: EpollSelector
15:19:01.054 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=huggingface
15:19:01.095 [INFO ] sentence_transformers.SentenceTransformer - Load pretrained SentenceTransformer: BAAI/bge-small-en-v1.5
15:19:01.205 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/modules.json HTTP/1.1" 200 0
15:19:01.317 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/config_sentence_transformers.json HTTP/1.1" 200 0
15:19:01.455 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/README.md HTTP/1.1" 200 0
15:19:01.575 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/modules.json HTTP/1.1" 200 0
15:19:01.687 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/sentence_bert_config.json HTTP/1.1" 200 0
15:19:01.689 [WARNING ] py.warnings - /home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
15:19:01.811 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/config.json HTTP/1.1" 200 0
15:19:02.406 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "HEAD /BAAI/bge-small-en-v1.5/resolve/main/tokenizer_config.json HTTP/1.1" 200 0
15:19:02.591 [DEBUG ] urllib3.connectionpool - https://huggingface.co:443 "GET /api/models/BAAI/bge-small-en-v1.5/revision/main HTTP/1.1" 200 148767
15:19:02.767 [INFO ] sentence_transformers.SentenceTransformer - 2 prompts are loaded, with the keys: ['query', 'text']
15:19:02.767 [DEBUG ] llama_index.core.storage.kvstore.simple_kvstore - Loading llama_index.core.storage.kvstore.simple_kvstore from /home/matthieuschwartz/roedererGPT/local_data/private_gpt/index_store.json.
15:19:02.767 [DEBUG ] fsspec.local - open file: /home/matthieuschwartz/roedererGPT/local_data/private_gpt/index_store.json
15:19:02.768 [DEBUG ] llama_index.core.storage.kvstore.simple_kvstore - Loading llama_index.core.storage.kvstore.simple_kvstore from /home/matthieuschwartz/roedererGPT/local_data/private_gpt/docstore.json.
15:19:02.768 [DEBUG ] fsspec.local - open file: /home/matthieuschwartz/roedererGPT/local_data/private_gpt/docstore.json
15:19:02.768 [DEBUG ] private_gpt.components.ingest.ingest_component - Initializing base ingest component type=SimpleIngestComponent
15:19:02.768 [DEBUG ] private_gpt.components.ingest.ingest_component - Rentré dans le initialize
15:19:02.768 [INFO ] private_gpt.components.ingest.ingest_component - No creating a new vector store index
15:19:02.768 [DEBUG ] private_gpt.components.ingest.ingest_component - No creating a new vector store index
15:19:02.768 [INFO ] llama_index.core.indices.loading - Loading all indices.
15:19:02.771 [DEBUG ] private_gpt.ui.ui - Creating the UI blocks
15:19:03.193 [DEBUG ] matplotlib - matplotlib data path: /home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/matplotlib/mpl-data
15:19:03.199 [DEBUG ] matplotlib - CONFIGDIR=/home/matthieuschwartz/.config/matplotlib
15:19:03.200 [DEBUG ] matplotlib - interactive is False
15:19:03.200 [DEBUG ] matplotlib - platform is linux
15:19:03.255 [DEBUG ] matplotlib - CACHEDIR=/home/matthieuschwartz/.cache/matplotlib
15:19:03.258 [DEBUG ] matplotlib.font_manager - Using fontManager instance from /home/matthieuschwartz/.cache/matplotlib/fontlist-v390.json
15:19:03.481 [DEBUG ] asyncio - Using selector: EpollSelector
15:19:03.483 [WARNING ] py.warnings - /home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/utils.py:1000: UserWarning: Expected 3 arguments for function <function submit_feedback at 0x77d57622c9a0>, received 1.
  warnings.warn(
15:19:03.483 [WARNING ] py.warnings - /home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/utils.py:1004: UserWarning: Expected at least 3 arguments for function <function submit_feedback at 0x77d57622c9a0>, received 1.
  warnings.warn(
15:19:03.661 [INFO ] private_gpt.ui.ui - Mounting the gradio UI, at path=/
15:19:03.716 [INFO ] uvicorn.error - Started server process [91027]
15:19:03.716 [INFO ] uvicorn.error - Waiting for application startup.
15:19:03.717 [INFO ] uvicorn.error - Application startup complete.
15:19:03.717 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
15:19:06.702 [INFO ] uvicorn.access - 127.0.0.1:36752 - "GET / HTTP/1.1" 200
15:19:06.806 [INFO ] uvicorn.access - 127.0.0.1:36752 - "GET /info HTTP/1.1" 200
15:19:06.815 [INFO ] uvicorn.access - 127.0.0.1:36760 - "GET /theme.css?v=100fecf9b6f943e2a9d4059c81be191f3247795d7babff175b481229879e5c02 HTTP/1.1" 200
15:19:06.816 [INFO ] uvicorn.access - 127.0.0.1:36752 - "GET /heartbeat/6ieg6lo4haq HTTP/1.1" 200
15:19:07.016 [DEBUG ] matplotlib.pyplot - Loaded backend agg version v2.2.
15:19:07.018 [INFO ] uvicorn.access - 127.0.0.1:36760 - "POST /run/predict HTTP/1.1" 200
15:19:07.019 [INFO ] uvicorn.access - 127.0.0.1:36770 - "POST /queue/join HTTP/1.1" 200
15:19:07.043 [INFO ] uvicorn.access - 127.0.0.1:36760 - "GET /queue/data?session_hash=6ieg6lo4haq HTTP/1.1" 200
15:19:07.107 [INFO ] uvicorn.access - 127.0.0.1:36760 - "POST /queue/join HTTP/1.1" 200
15:19:07.112 [INFO ] uvicorn.access - 127.0.0.1:36760 - "GET /queue/data?session_hash=6ieg6lo4haq HTTP/1.1" 200
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_part_begin with no data
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_header_field with data[61:80]
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_header_value with data[82:128]
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_header_end with no data
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_header_field with data[130:142]
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_header_value with data[144:159]
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_header_end with no data
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_headers_finished with no data
15:19:11.238 [DEBUG ] multipart.multipart - Calling on_part_data with data[163:5845]
15:19:11.239 [DEBUG ] multipart.multipart - Calling on_part_data with data[0:1]
15:19:11.239 [DEBUG ] multipart.multipart - Calling on_part_data with data[5846:18973]
15:19:11.239 [DEBUG ] multipart.multipart - Calling on_part_end with no data
15:19:11.239 [DEBUG ] multipart.multipart - Calling on_end with no data
15:19:11.241 [INFO ] uvicorn.access - 127.0.0.1:36760 - "POST /upload HTTP/1.1" 200
15:19:11.263 [INFO ] uvicorn.access - 127.0.0.1:36760 - "POST /queue/join HTTP/1.1" 200
15:19:11.270 [INFO ] uvicorn.access - 127.0.0.1:36760 - "GET /queue/data?session_hash=6ieg6lo4haq HTTP/1.1" 200
15:19:11.281 [DEBUG ] private_gpt.ui.ui - Loading count=1 files
15:19:11.281 [INFO ] private_gpt.server.ingest.ingest_service - Ingesting file_names=['sample.pdf']
15:19:11.281 [DEBUG ] private_gpt.components.ingest.ingest_component - Début du bulk ingest
15:19:11.281 [DEBUG ] private_gpt.components.ingest.ingest_helper - Transforming file_name=sample.pdf into documents
15:19:11.281 [DEBUG ] private_gpt.components.ingest.ingest_helper - Specific reader found for extension=.pdf
15:19:11.339 [DEBUG ] fsspec.local - open file: /tmp/gradio/1f59e84376b2acda296b1b431a16e5cd5dfb7da8/sample.pdf
15:19:11.364 [DEBUG ] private_gpt.components.ingest.ingest_helper - Excluding metadata from count=1 documents
15:19:11.365 [DEBUG ] private_gpt.components.ingest.ingest_component - Transforming count=1 documents into nodes
15:19:11.365 [DEBUG ] private_gpt.components.ingest.ingest_component - Rentré dans _save_docs
15:19:11.365 [DEBUG ] private_gpt.components.ingest.ingest_component - Rentré dans le with
15:19:11.365 [DEBUG ] private_gpt.components.ingest.ingest_component - Rentré dans le for
Parsing nodes: 0%| | 0/1 [00:00<?, ?it/s]
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Sample PDFThis is a simple PDF file.
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Fun fun fun.Lorem ipsum dolor sit amet, consect...
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Phasellus facilisis odio sed mi.
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Curabitur suscipit.
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nullam vel nisi.
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Etiam semper ipsum ut lectus.
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Proin aliquam, erat eget pharetra commodo, eros...
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Integer a erat.
15:19:11.369 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Cras laoreet ligula cursus enim.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Aenean scelerisque velit et tellus.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Vestibulum dictum aliquet sem.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nulla facilisi.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Vestibulum accumsan ante vitae elit.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nulla erat dolor, blandit in, rutrum quis, semp...
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nullam varius congue risus.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Vivamus sollicitudin, metus ut interdum eleifen...
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Suspendisse libero odio, mattis sit amet, aliqu...
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Sed vitae augue.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Aliquam erat volutpat.
15:19:11.370 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Aliquam feugiat vulputate nisl.
15:19:11.371 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Suspendisse quis nulla pretium ante pretium mol...
15:19:11.371 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Proin velit ligula, sagittis at, egestas a, pul...
15:19:11.371 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Praesent pulvinar, nunc quis iaculis sagittis, ...
15:19:11.371 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nunc cursus ligula.
15:19:11.371 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nulla facilisi.
15:19:11.371 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Phasellus ullamcorper consectetuer ante.
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Duis tincidunt, urna id condimentum luctus, nib...
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Pellentesque vestibulum convallis sem.
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nulla consequat quam ut nisl.
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nullam est.
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Curabitur tincidunt dapibus lorem.
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Proin velit turpis, scelerisque sit amet, iacul...
15:19:11.372 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Phasellus lorem arcu, feugiat eu, gravida eu, c...
15:19:11.373 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nullam vel est ut ipsum volutpat feugiat.
15:19:11.373 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Aenean pellentesque.In mauris.
15:19:11.373 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Pellentesque dui nisi, iaculis eu, rhoncus in, ...
15:19:11.373 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Ut odio justo, scelerisque vel, facilisis non, ...
15:19:11.373 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Cras nec massa sit amet tortor volutpat varius.
15:19:11.373 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Donec lacinia, neque a luctus aliquet, pede mas...
15:19:11.374 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Fusce erat nibh, aliquet in, eleifend eget, com...
15:19:11.374 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Fusce consectetuer.
15:19:11.374 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Cras risus tortor, porttitor nec, tristique sed...
15:19:11.374 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Fusce vulputate ipsum a mauris.
15:19:11.374 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Phasellus mollis.
15:19:11.375 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Curabitur sed urna.
15:19:11.375 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Aliquam nec sapien non nibh pulvinar convallis.
15:19:11.375 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Vivamus facilisis augue quis quam.
15:19:11.375 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Proin cursus aliquet metus.
15:19:11.375 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Suspendisse lacinia.
15:19:11.375 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Nulla at tellus ac turpis eleifend scelerisque.
15:19:11.376 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Maecenas a pede vitae enim commodo interdum.
15:19:11.376 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Donec odio.
15:19:11.376 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Sed sollicitudin dui vitae justo.Morbi elit nun...
15:19:11.376 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Suspendisse eget mauris eu tellus molestie curs...
15:19:11.376 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Duis ut magna at justo dignissim condimentum.
15:19:11.377 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Cum sociis natoque penatibus et magnis dis part...
15:19:11.377 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Vivamus varius.
15:19:11.377 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Ut sit amet diam suscipit mauris ornare aliquam.
15:19:11.377 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Sed varius.
15:19:11.377 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Duis arcu.
15:19:11.377 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Etiam tristique massa eget dui.
15:19:11.378 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Phasellus congue.
15:19:11.378 [DEBUG ] llama_index.core.node_parser.node_utils - > Adding chunk: Aenean est erat, tincidunt eget, venenatis quis...
Parsing nodes: 100%|██████████████████████████████| 1/1 [00:00<00:00, 78.10it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 1.08it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 2.54it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 2.27it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 3.80it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 5.11it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 4.31it/s]
Batches: 100%|████████████████████████████████████| 1/1 [00:00<00:00, 12.62it/s]
Generating embeddings: 100%|████████████████████| 63/63 [00:02<00:00, 24.09it/s]
Generating embeddings: 0it [00:00, ?it/s]
Traceback (most recent call last):
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/queueing.py", line 532, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/blocks.py", line 1928, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/blocks.py", line 1514, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
    ^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/tasks.py", line 349, in __wakeup
    future.result()
  File "/usr/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/gradio/utils.py", line 832, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/private_gpt/ui/ui.py", line 354, in _upload_file
    self._ingest_service.bulk_ingest([(str(path.name), path) for path in paths])
  File "/home/matthieuschwartz/roedererGPT/private_gpt/server/ingest/ingest_service.py", line 87, in bulk_ingest
    documents = self.ingest_component.bulk_ingest(files)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/private_gpt/components/ingest/ingest_component.py", line 162, in bulk_ingest
    saved_documents.extend(self._save_docs(documents))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/private_gpt/components/ingest/ingest_component.py", line 173, in _save_docs
    self._index.insert(document, show_progress=True)  ## prob ici
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/llama_index/core/indices/base.py", line 246, in insert
    self.insert_nodes(nodes, **insert_kwargs)
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 330, in insert_nodes
    self._insert(nodes, **insert_kwargs)
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 312, in _insert
    self._add_nodes_to_index(self._index_struct, nodes, **insert_kwargs)
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 234, in _add_nodes_to_index
    new_ids = self._vector_store.add(nodes_batch, **insert_kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/llama_index/vector_stores/elasticsearch/base.py", line 298, in add
    return asyncio.get_event_loop().run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matthieuschwartz/roedererGPT/.env/lib/python3.11/site-packages/nest_asyncio.py", line 40, in _get_event_loop
    loop = events.get_event_loop_policy().get_event_loop()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/asyncio/events.py", line 681, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'AnyIO worker thread'.

It seems there is an async error. My new settings are:

        case "elasticsearch":

            # from llama_index.vector_stores.elasticsearch import ElasticsearchStore

            try:
                from elasticsearch import Elasticsearch
                from llama_index.vector_stores.elasticsearch import (  # type: ignore
                    ElasticsearchStore,
                )
            except ImportError as e:
                raise ImportError(
                    "Elasticsearch dependencies not found, install avec `--extras elasticsearch`"
                ) from e

            from elasticsearch import AsyncElasticsearch

            es_client = AsyncElasticsearch("http://localhost:9200/")

            self.vector_store = typing.cast(
                VectorStore,
                ElasticsearchStore(
                    index_name="make_this_parameterizable_per_api_call",
                    es_client = es_client,
                    # es_url = "http://localhost:9200",
                )
            )
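
As a side note, the only workaround I have found described for this class of error is to make sure the worker thread has an event loop before the store is used. A minimal sketch of that idea (untested in private-gpt; where exactly to place it, e.g. at the top of `_save_docs`, is my assumption):

    import asyncio

    # Sketch: give the current (AnyIO worker) thread an event loop so that
    # llama-index's asyncio.get_event_loop().run_until_complete(...) can succeed.
    try:
        asyncio.get_event_loop()
    except RuntimeError:
        asyncio.set_event_loop(asyncio.new_event_loop())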

I'm running Elasticsearch with Docker and have no connection problem. I modified the settings (of course) in settings-ollama.yaml:

    vectorstore:
      database: elasticsearch

    elasticsearch:
      host: localhost
      port: 9200

and in settings.py:

    class VectorstoreSettings(BaseModel):
        database: Literal["chroma", "qdrant", "postgres", "elasticsearch"]
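
For completeness, the YAML keys above only load if settings.py also declares a matching model; mine looks roughly like this (the field names mirror the YAML keys, the class name is my own):

    from pydantic import BaseModel

    class ElasticsearchSettings(BaseModel):
        host: str
        port: int

    # ...plus a matching field on the top-level Settings model, e.g.:
    #     elasticsearch: ElasticsearchSettings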

It seems like the embeddings are generated but can't be added to the vector store...
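
For what it's worth, the error itself reproduces outside private-gpt: `asyncio.get_event_loop()` only auto-creates a loop in the main thread, and Gradio runs the upload handler in an AnyIO worker thread. A standalone sketch:

    import asyncio
    import threading

    def worker() -> None:
        # In a non-main thread with no loop set, this raises
        # "RuntimeError: There is no current event loop in thread ...",
        # which is exactly what ElasticsearchStore.add() runs into.
        try:
            asyncio.get_event_loop()
        except RuntimeError as e:
            print(e)

    threading.Thread(target=worker).start()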

Thank you for your help.

jaluma commented 2 weeks ago

Could you try using the sync version? Additionally, could you upload your changes and open a PR? It's an interesting contribution, and it will be easier to identify the error.
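
For reference, one reading of "the sync version" is to drop the hand-built `AsyncElasticsearch` client and let `ElasticsearchStore` manage its own connection through its `es_url` parameter; whether that sidesteps the event-loop issue here is untested:

    # Sketch of the same case arm without the explicit async client;
    # es_url is an existing ElasticsearchStore parameter.
    self.vector_store = typing.cast(
        VectorStore,
        ElasticsearchStore(
            index_name="make_this_parameterizable_per_api_call",
            es_url="http://localhost:9200",
        ),
    )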