simon987 / sist2

Lightning-fast file system indexer and search tool

"sist2 web module encountered an error while connecting to Elasticsearch. See server logs for more information." after relaunch/serve #476

Closed · glottisfaun0000 closed this issue 6 months ago

glottisfaun0000 commented 6 months ago

Device Information:

docker-compose.yml

services:
  elasticsearch:
    image: elasticsearch:7.17.9
    restart: unless-stopped
    environment:
      - "discovery.type=single-node"
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
  sist2-admin:
    image: simon987/sist2:3.3.4-x64-linux
    restart: unless-stopped
    volumes:
      - ./sist2-admin-data/:/sist2-admin/
      - /:/host
    ports:
      - 4090:4090 # sist2
      - 4091:8080 # sist2-admin
    working_dir: /root/sist2-admin/
    entrypoint: python3 /root/sist2-admin/sist2_admin/app.py

Describe the bug

Steps To Reproduce

  1. docker compose up -d
  2. Index a job
  3. Serve the frontend (works)
  4. docker compose down && docker compose up -d
  5. Serve the frontend for the same job

sist2 web module encountered an error while connecting to Elasticsearch. See server logs for more information.
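
The same steps as shell commands (a sketch; admin-UI actions are shown as comments):

  docker compose up -d
  # in sist2-admin (port 4091): create a job and run the indexing task
  # serve the frontend (port 4090): works at this point
  docker compose down && docker compose up -d
  # serve the frontend for the same job: fails with the error above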

Expected behavior

  1. Web serve works after second launch

Actual Behavior

  1. "sist2 web module encountered an error while connecting to Elasticsearch. See server logs for more information." although admin server logs don't show anything relevant
  2. If I create a new job and index it, I can select that for the frontend, stop & go, and it will work. Then I can even select the old job/index and serve but webserver treats it as empty. Then if I docker compose down && docker compose up -d, the new job becomes broken in the same way.

Additional context: Not sure if this is a permissions issue or what, but my installation is a pretty bog-standard Docker Compose setup, so I'm surprised nobody else runs into this. When in the state where the frontend can't see the last index, Admin > Backends > elasticsearch still tests successfully. If this is expected behavior (you can't serve a frontend to browse an index created before the current run of the application), that seems highly limiting; I was hoping to use sist2 to index cold-storage drives.
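
One way to check this directly (a diagnostic sketch, not from the original report; the compose file above doesn't publish port 9200, so curl runs inside the container) is to list the indices Elasticsearch currently holds:

  # List all indices; if the sist2 index is missing after a restart,
  # the data did not persist across docker compose down/up.
  docker compose exec elasticsearch curl -s 'http://localhost:9200/_cat/indices?v'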

simon987 commented 6 months ago

although admin server logs don't show anything relevant

Could you check the contents of logs/frontend-default.log?
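
(With the compose file above, that should correspond to ./sist2-admin-data/logs/frontend-default.log on the host, assuming sist2-admin keeps its logs under the mounted /sist2-admin/ directory.)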

glottisfaun0000 commented 6 months ago

Ahh, I had just been checking docker container logs sist2-admin; this looks more informative.

{"stderr": "T0 [2024-04-03 00:18:27] [INFO main.c] Loaded index: [test2]\n"}
{"stderr": "T0 [2024-04-03 00:18:27] [INFO serve.c] Starting web server @ http://0.0.0.0:4090\n"}
{"stderr": "T0 [2024-04-03 00:18:32] [WARNING serve.c] ElasticSearch error during query (404)\n"}
{"stderr": "T0 [2024-04-03 00:18:32] [WARNING serve.c] {\n"}
{"stderr": "\t\"error\":\t{\n"}
{"stderr": "\t\t\"root_cause\":\t[{\n"}
{"stderr": "\t\t\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t\t\t}],\n"}
{"stderr": "\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t},\n"}
{"stderr": "\t\"status\":\t404\n"}
{"stderr": "} \n"}
{"stderr": "T0 [2024-04-03 00:18:32] [WARNING serve.c] ElasticSearch error during query (404)\n"}
{"stderr": "T0 [2024-04-03 00:18:32] [WARNING serve.c] {\n"}
{"stderr": "\t\"error\":\t{\n"}
{"stderr": "\t\t\"root_cause\":\t[{\n"}
{"stderr": "\t\t\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t\t\t}],\n"}
{"stderr": "\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t},\n"}
{"stderr": "\t\"status\":\t404\n"}
{"stderr": "} \n"}
{"stderr": "T0 [2024-04-03 00:18:33] [WARNING serve.c] ElasticSearch error during query (404)\n"}
{"stderr": "T0 [2024-04-03 00:18:33] [WARNING serve.c] {\n"}
{"stderr": "\t\"error\":\t{\n"}
{"stderr": "\t\t\"root_cause\":\t[{\n"}
{"stderr": "\t\t\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t\t\t}],\n"}
{"stderr": "\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t},\n"}
{"stderr": "\t\"status\":\t404\n"}
{"stderr": "} \n"}
{"stderr": "T0 [2024-04-03 00:18:33] [WARNING serve.c] ElasticSearch error during query (404)\n"}
{"stderr": "T0 [2024-04-03 00:18:33] [WARNING serve.c] {\n"}
{"stderr": "\t\"error\":\t{\n"}
{"stderr": "\t\t\"root_cause\":\t[{\n"}
{"stderr": "\t\t\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t\t\t}],\n"}
{"stderr": "\t\t\"type\":\t\"index_not_found_exception\",\n"}
{"stderr": "\t\t\"reason\":\t\"no such index [sist2]\",\n"}
{"stderr": "\t\t\"resource.type\":\t\"index_or_alias\",\n"}
{"stderr": "\t\t\"resource.id\":\t\"sist2\",\n"}
{"stderr": "\t\t\"index_uuid\":\t\"_na_\",\n"}
{"stderr": "\t\t\"index\":\t\"sist2\"\n"}
{"stderr": "\t},\n"}
{"stderr": "\t\"status\":\t404\n"}
{"stderr": "} \n"}

Maybe it's not able to find the .sist2 file specified? Docker persistent storage issue?

simon987 commented 6 months ago

I think what is happening is that docker compose down removes the Elasticsearch container, and since no data volume is mounted, the index data is deleted along with it. sist2 can't serve the frontend because the Elasticsearch index is gone.

To add persistence, you can add this to your compose file:

  elasticsearch:
    volumes:
      - /path/to/elasticsearch/data/:/usr/share/elasticsearch/data
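
A named volume (a sketch, not from the thread) is an alternative that lets Docker manage the directory and its ownership:

  elasticsearch:
    volumes:
      - es-data:/usr/share/elasticsearch/data

volumes:
  es-data:

With a named volume, docker compose down keeps the data unless you pass -v/--volumes, and it sidesteps the host-permission issue described in the next comment.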
glottisfaun0000 commented 6 months ago

Awesome, I got it working with that, but initially the elasticsearch service wasn't able to write to the volume/directory until I added PGID and PUID to the environment and chown'd the directory to match. Thanks!
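
For reference, the official Elasticsearch image runs as uid 1000 (gid 0), so with a bind mount the host directory must be writable by that user. A sketch (the path is an example):

  # The container's elasticsearch user is uid 1000, gid 0 (root group);
  # give it ownership of the bind-mounted data directory:
  sudo chown -R 1000:0 /path/to/elasticsearch/data/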