Overv / openstreetmap-tile-server

Docker file for a minimal effort OpenStreetMap tile server

Segmentation fault during import #400

Open · Eric64534 opened this issue 10 months ago

Eric64534 commented 10 months ago

I'm getting a segmentation fault during the planet import. Any ideas? Here's the log:

tech@4YF0GT3:/mnt/c/Windows/system32$ docker volume create osm-data
osm-data
tech@4YF0GT3:/mnt/c/Windows/system32$ docker run \
    -v /mnt/e/planet-231002.osm.pbf:/data/region.osm.pbf \
    -v osm-data:/data/database/ \
    -e "FLAT_NODES=enabled" \
    -e "OSM2PGSQL_EXTRA_ARGS=-C 8192" \
    -e THREADS=18 \
    overv/openstreetmap-tile-server \
    import
Unable to find image 'overv/openstreetmap-tile-server:latest' locally
latest: Pulling from overv/openstreetmap-tile-server
6b851dcae6ca: Pull complete
bb071db28a33: Pull complete
a862358c8622: Pull complete
57ea4a769709: Pull complete
7637307ae531: Pull complete
fa1264c2a6e6: Pull complete
5591dd38bd35: Pull complete
c82b1fb2ea15: Pull complete
47ecf7831eac: Pull complete
18125c639464: Pull complete
2b0335ae4896: Pull complete
4c9807a2e2f3: Pull complete
649b76086468: Pull complete
b52cd0b2e16e: Pull complete
9874a5bac373: Pull complete
acd614dbc6df: Pull complete
8a950b4e43d0: Pull complete
269690f5a06c: Pull complete
19ed3f017280: Pull complete
4b7f8bb34a12: Pull complete
25b1f3b221b7: Pull complete
0056ac6bf411: Pull complete
57110a116d22: Pull complete
208240568089: Pull complete
Digest: sha256:b6a79da39b6d0758368f7c62d22e49dd3ec59e78b194a5ef9dee2723b1f3fa79
Status: Downloaded newer image for overv/openstreetmap-tile-server:latest
++ ls -A /data/style/

shared_buffers = 128MB
min_wal_size = 1GB
max_wal_size = 2GB # Overridden below
maintenance_work_mem = 256MB

# Suggested settings from
# https://github.com/openstreetmap/chef/blob/master/roles/tile.rb#L38-L45

max_connections = 250
temp_buffers = 32MB
work_mem = 128MB
wal_buffers = 1024kB
wal_writer_delay = 500ms
commit_delay = 10000
#checkpoint_segments = 60 # unrecognized in psql 10.7.1
max_wal_size = 2880MB
random_page_cost = 1.1
track_activity_query_size = 16384
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02

listen_addresses = '*'
autovacuum = on
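
For reference, once the database in the container is actually running, the values it picked up from this file can be double-checked by querying the server (a sketch only; the container name is a placeholder, and running psql as the postgres user against the gis database is an assumption based on this image's defaults):

docker exec -u postgres <container-name> psql -d gis -c "SHOW max_connections;"
docker exec -u postgres <container-name> psql -d gis -c "SHOW work_mem;"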

Istador commented 10 months ago

THREADS=18 seems too high: that would result in up to (3 + (1 + 18) * 14) = 269 connections, whereas max_connections in postgresql.custom.conf.tmpl is configured to 250.

Also following from this: does your system really have 269 * 128 MB + 8192 MB ≈ 42 GB of memory (128 MB work_mem per connection plus the 8 GB osm2pgsql cache)?

See #330
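
For reference, that back-of-the-envelope estimate can be scripted to try other THREADS values (a rough sketch; the 3 and 14 constants are the same ones used in the estimate above, and the 128 MB / 8192 MB figures come from work_mem and -C 8192):

#!/usr/bin/env bash
# Sketch: estimate PostgreSQL connections and worst-case memory for a given THREADS value.
THREADS=18
CONNECTIONS=$((3 + (1 + THREADS) * 14))    # 269 here, which exceeds max_connections = 250
WORK_MEM_MB=$((CONNECTIONS * 128))         # up to 128 MB work_mem per connection
TOTAL_MB=$((WORK_MEM_MB + 8192))           # plus the 8 GB osm2pgsql cache (-C 8192)
echo "${CONNECTIONS} connections, up to ${TOTAL_MB} MB (~$((TOTAL_MB / 1024)) GB) of memory"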

Eric64534 commented 10 months ago

Thank you so much for the reply! Yes, this system has an Intel i9-10900X (20 threads) and 64 GB of RAM. I'll try again with fewer threads.

Best, Eric

Eric64534 commented 10 months ago

Well, I got a little further. Now it says I ran out of disk space. The osm-data volume resides on an external USB3-to-SATA adapter with a 1.9 TB SSD. With everything idle now (after the import aborted), the volume reports 926 GB and Windows says the disk (E:) has 760 GB free. My boot drive (C:) has 500 GB free. Any assistance would be greatly appreciated!

tech@4YF0GT3:/mnt/c/Users/ehughes$ docker run \
    -v /mnt/e/planet-231002.osm.pbf:/data/region.osm.pbf \
    -v osm-data:/data/database/ \
    -e "FLAT_NODES=enabled" \
    -e "OSM2PGSQL_EXTRA_ARGS=-C 8192" \
    -e THREADS=10 \
    overv/openstreetmap-tile-server \
    import

(The same postgresql.custom.conf settings as in the first run are echoed again here.)

Istador commented 10 months ago

If I remember correctly, the docker settings on Windows might impose a maximum volume size limit? IDK.

It likely cleaned up the files from the step it was on after the error happened, which is why there is more free space now than when the error occurred. According to the logs, the data import took 3 days, and it failed 3 hours later while creating the indexes for the planet_osm_ways table.

According to https://wiki.openstreetmap.org/wiki/Planet.osm the uncompressed data is 1.8 TB, so 1.9 TB for the whole planet might not be enough? But the import doesn't keep all of that data, so it should be less, though the database also needs space of its own for indexes and the like. I think 1.9 TB should be enough if there is nothing else on the disk; reports from about two years ago put the imported size at around 1.2 TB.
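
If you want to see how much space the import actually has available, one quick check is to look at free space from inside the named volume and at Docker's own accounting (a sketch; assumes the osm-data volume from the commands above and uses a throwaway alpine container):

docker run --rm -v osm-data:/data/database/ alpine df -h /data/database
docker system df -v    # per-volume sizes as Docker sees them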

Eric64534 commented 9 months ago

Just to follow up, the issue was indeed Windows (and/or Docker) imposing a volume size limit of 1 TB. I tried all sorts of things to increase it, but nothing worked. I finally gave up and moved to Ubuntu, and everything worked. Really appreciate your help!!
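
For anyone else who runs into this on Windows: the limit usually comes from the maximum size of the WSL2 virtual disk (ext4.vhdx) backing Docker Desktop's data, not from the physical drive. The commonly suggested workaround is to expand that VHDX with diskpart (a sketch only; the exact path varies per install, newer Docker Desktop/WSL versions may handle the filesystem resize differently, and as noted above it did not end up working in this case):

wsl --shutdown
diskpart
# then, inside diskpart:
#   select vdisk file="C:\Users\<user>\AppData\Local\Docker\wsl\data\ext4.vhdx"
#   expand vdisk maximum=2000000    (new maximum size in MB, ~2 TB)
#   exit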