wiktorn / Overpass-API

Overpass API docker image

No space left on device - Entire world #86

Closed · ShakeyJake1 closed this 9 months ago

ShakeyJake1 commented 2 years ago

I am struggling to get Docker to work for the whole world. I have used the code shown in the example for “Overpass clone covering the whole world”, but I am running into an issue with running out of space after around 10-13 GB has been downloaded. I get the following error:

“Cannot write to '/db/db/nodes.bin.idx' (No space left on device).”

I have attempted to run it several times, and each time it seems to stop at a similar point with the same result. I am running Docker Desktop 4.4.3 (72729) on Windows 10 Pro on a 1 TB SSD with over 750 GB free. I have tried several different things to get it to work without success. Between each attempt I used docker system prune --all --force, so I have no other containers taking up space.

I first attempted to use WSL 2. I changed my Docker daemon settings to increase the space allocation to 300 GB. I also checked the size allocated to the image file (ext4.vhdx) and set this to 300 GB as well, which has not made a difference. Each time I tried this, it stopped after around 12.5 GB was downloaded.

I then tried using Hyper-V instead of WSL 2 and adjusted the Docker settings to 8 CPUs, 20 GB memory, 4 GB swap, and a 500 GB disk image. With this setup it stopped after around 10 GB was downloaded (/db/db/nodes.bin 20%[=========>] 10.14G 1.27MB/s in 2h 7m).

I wondered whether this was related to running out of inodes, which I have seen mentioned in other posts elsewhere, so I used the command below to check. It wouldn't let me see the result once the 'No space left on device' issue had already occurred, so I ran it while downloading from scratch, and it looked like I had plenty of inodes left while the download was progressing.

docker run -it --privileged --pid=host justincormack/nsenter1 /bin/df -i /var/lib

I also used the command below to confirm that the drive was actually set to 600 GB, which it was, and it showed plenty of room left.

docker run -it --privileged --pid=host justincormack/nsenter1 /bin/df -h /var/lib
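For what it's worth, while the import is still running you can also check the volume mounted at /db from inside the container itself, since that is where the import writes. This is just a sketch and assumes the container is running and is named overpass_world; adjust to whatever name you used:

```sh
# Disk space and inodes of the /db volume as seen from inside the running container
docker exec overpass_world df -h /db
docker exec overpass_world df -i /db
```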

Any suggestions on what I can do to try to resolve the issue would be great. As a side note, I have got the process running for smaller parts of the world, just not for such a large amount of data.

ShakeyJake1 commented 2 years ago

I have been continuing to try a workaround to get this working on the entire world. I thought that if I downloaded the osm.bz2 file first and used it locally, it would at least allow me to test things out a bit quicker. I managed to get this to work for https://download.geofabrik.de/australia-oceania-latest.osm.bz2, but I ran into trouble doing it for the entire world. I get the following error:

[screenshot of the error]

I get a similar error message for other parts of the world where the size of the .bz2 file is above 10 GB. The screenshot below is from when I attempted to run asia-latest.osm.bz2.

[screenshot of the error for asia-latest.osm.bz2]

What is interesting is that both screenshots show the same amount of data received: 9.7 GB. I'm not sure what is limiting it to this number, or whether this error relates to the same issue from my previous post, but I thought it worth mentioning.

The steps I ran through to use a local .bz2 file were as follows (a consolidated sketch of the commands follows the list):

Download https://planet.openstreetmap.org/planet/planet-latest.osm.bz2 and store it locally

Create a container using the following command: docker container create -e OVERPASS_META=yes -e OVERPASS_MODE=init -e OVERPASS_PLANET_URL=file:///planet-latest.osm.bz2 -e OVERPASS_DIFF_URL=https://planet.openstreetmap.org/replication/minute/ -e OVERPASS_RULES_LOAD=10 -v /big/docker/overpass_db/:/db -p 12345:80 -i -t --name overpass_world wiktorn/overpass-api

Copy the local .bz2 file to the container using the command docker cp c:\users\jacob\documents\osm_downloads\planet-latest.osm.bz2 overpass_world:/planet-latest.osm.bz2

Run the container from the Docker Desktop application.
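For reference, here is the same sequence as a single consolidated sketch. The paths and names are just the ones from my steps above, so adjust them for your setup; the line continuations are POSIX-shell style, and on Windows PowerShell you would use backticks instead:

```sh
# 1. Download https://planet.openstreetmap.org/planet/planet-latest.osm.bz2 and store it locally.

# 2. Create the container; OVERPASS_PLANET_URL points at the file copied in below.
docker container create \
  -e OVERPASS_META=yes \
  -e OVERPASS_MODE=init \
  -e OVERPASS_PLANET_URL=file:///planet-latest.osm.bz2 \
  -e OVERPASS_DIFF_URL=https://planet.openstreetmap.org/replication/minute/ \
  -e OVERPASS_RULES_LOAD=10 \
  -v /big/docker/overpass_db/:/db \
  -p 12345:80 \
  -i -t --name overpass_world \
  wiktorn/overpass-api

# 3. Copy the local planet file into the container.
docker cp c:\users\jacob\documents\osm_downloads\planet-latest.osm.bz2 overpass_world:/planet-latest.osm.bz2

# 4. Start the container (equivalent to starting it from the Docker Desktop UI).
docker start -a overpass_world
```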

The fact that I have been able to get this process to work for smaller parts of the world makes me think my issue might be related to the resources available to the container. How much would people expect to allocate to the image for the entire world? I thought 600 GB would be plenty, as the .bz2 file for the entire planet is only around 120 GB.

ShakeyJake1 commented 2 years ago

I tried a few more things to get this to work using the OVERPASS_MODE=init method, including adding OVERPASS_FLUSH_SIZE=1 and turning off the swap, without success. I'm not quite sure what OVERPASS_FLUSH_SIZE does, but it was suggested for other issues with resource limitations. From going through some of the other closed issues, it sounds like the Overpass API is resource-hungry during import/load. I think that might be the root cause of my issue. I had assumed 20 GB of RAM would have been enough. Can anyone confirm whether they have been able to use the init mode for the entire world, and how much RAM they needed to get it to work?

However, I have got an alternative method to work. I used the method shown in issue 48, which clones the entire world. It did take me a few attempts to get this working. It turns out that it clones data from . That location contains cloned files for the current day and the previous day, with the data refreshing around 1 am German time. The issue I ran into was that the files it was cloning from disappeared during my download. The Docker clone process seems to only use the previous day's clone, so there are only around 24 hours to complete the roughly 120 GB download before those files are no longer available. I ended up timing the start of the download just after the cloned files were updated, to give it the maximum possible time to complete.
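For anyone who wants to try the same route, the command I used was essentially the clone example from this image's README; I'm writing it out from memory here, so double-check the current README for the exact variable names:

```sh
# Clone an existing Overpass database (OVERPASS_MODE=clone) instead of importing a planet file;
# the volume path, port, and container name are just examples.
docker run \
  -e OVERPASS_META=yes \
  -e OVERPASS_MODE=clone \
  -e OVERPASS_DIFF_URL=https://planet.openstreetmap.org/replication/minute/ \
  -v /big/docker/overpass_clone_db/:/db \
  -p 12346:80 \
  -i -t --name overpass_world_clone \
  wiktorn/overpass-api
```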

I am going to keep this ticket open for now, as I would like to see if there are any other suggestions to get the init method to work on the full planet.

tylercubell commented 7 months ago

It sounds like it was a download timing issue in your case, but for anyone else running into the Cannot write to '/db/db/nodes.bin.idx' (No space left on device) issue using Docker Desktop with the WSL 2 backend, try increasing the size of the Docker disk image with the method described in this Stack Overflow answer.
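Roughly, the approach described there looks like the sketch below. The vhdx path is the default Docker Desktop location and the size is just an example, so follow the linked answer for the exact details of your setup:

```sh
# 1. Shut down WSL so the virtual disk is not in use (run from Windows).
wsl --shutdown

# 2. Run diskpart and expand the vhdx that backs docker-desktop-data.
#    Inside diskpart (path varies per user; maximum is the new size in MB):
#      select vdisk file="C:\Users\<you>\AppData\Local\Docker\wsl\data\ext4.vhdx"
#      expand vdisk maximum=512000
#      exit
diskpart

# 3. Grow the ext4 filesystem inside the expanded disk (e.g. with resize2fs from the
#    docker-desktop distro, as shown in the linked answer), then restart Docker Desktop.
```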