Closed: hodossy closed this issue 1 year ago
I looked it up a bit; this probably means that the tilemaker container was killed due to OOM. I think this project is genius, and we will definitely incorporate it into our operations. The ability to resume is a huge plus. Thanks for creating this! I would like to summarize my experience below; it may help take a step towards the stated goal of the project:
This program aims to be a simple, set-and-forget one-liner which gives anyone a way to get a full-featured and bang-up-to-date set of vector tiles for the entire planet on small hardware.
So here it is: The auto RAM setting decided it could take up to 5950 MB of memory, so I manually lowered that to 4096 MB, but I still regularly saw the memory consumption of the tilemaker process go well over 5 GB. Fortunately I managed to add 8 more GB of RAM to the instance (16 GB in total), so the process finished. I observed a peak memory usage of 10.3 GB for an input size of ~190 MB (the cut size was determined to be 273 MB). That file contained over 43 million tiles, and I think that is the reason for the memory requirements. I wonder whether the number of tiles in a slice could be taken into consideration when determining whether it requires further slicing.
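A slicing decision along these lines could weigh tile count alongside file size. Below is a minimal Go sketch; the function name and both thresholds are hypothetical (the 273 MB figure echoes the cut size from this report, and the tile limit is purely illustrative), not values taken from the project:

```go
package main

import "fmt"

// needsFurtherSlicing decides whether a slice should be split again.
// Hypothetical thresholds: the size cut-off mirrors the 273 MB figure
// from this report, and the tile cut-off is illustrative only.
func needsFurtherSlicing(sizeMB, tileCount int64) bool {
	const maxSliceSizeMB = 273
	const maxTilesPerSlice = 20_000_000
	return sizeMB > maxSliceSizeMB || tileCount > maxTilesPerSlice
}

func main() {
	// A ~190 MB slice is under the size cut-off but can still hold
	// 43 million tiles; this is the case the tile check would catch.
	fmt.Println(needsFurtherSlicing(190, 43_000_000)) // true
	fmt.Println(needsFurtherSlicing(190, 1_000_000))  // false
}
```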
There were also minor annoyances. When run without a pbf file given, if there is an error, then usually after the first re-run the planet OSM file is removed (considered corrupt for some reason), so it needs to be downloaded (and possibly sliced) again. I therefore quickly switched to a config file and downloaded the planet file myself (I got the URL from the code). I also noticed that the shapefiles are always unzipped, which seems unnecessary.
A good improvement would be the ability to continue slicing from a previously completed state. In my case, for example, after slicing completed under the 5950 MB RAM limit, I had to rerun the whole process, but it would have been much more efficient to just continue with the already present slices.
There was also an issue where a journal file was left in the mbtiles directory, which caused tile-join to fail, so I had to rerun the whole joining step.
I am closing this issue, as it is not a bug, but I see several possible improvements here (note that I have never written a single line of Go):

- planet.osm.pbf should not be removed automatically (when no pbf file is given); maybe prompt the user before doing so
- tile-join should be run on *.mbtiles files only; no journal file should be passed to the command

Please let me know what you think about these ideas!
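Guarding tile-join against leftover SQLite artifacts could be as simple as filtering directory entries by the exact `.mbtiles` suffix, which excludes names like `planet.mbtiles-journal`. A minimal Go sketch (the function name is illustrative, not from the project):

```go
package main

import (
	"fmt"
	"strings"
)

// mbtilesOnly keeps only names ending in ".mbtiles", so SQLite leftovers
// such as "planet.mbtiles-journal" are never passed on to tile-join.
func mbtilesOnly(names []string) []string {
	var out []string
	for _, n := range names {
		if strings.HasSuffix(n, ".mbtiles") {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	entries := []string{"a.mbtiles", "a.mbtiles-journal", "b.mbtiles", "notes.txt"}
	fmt.Println(mbtilesOnly(entries)) // [a.mbtiles b.mbtiles]
}
```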
Describe the bug
Generating one of the temporary mbtiles files dies with exit status 137.
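Exit status 137 is how a shell reports a child terminated by a signal: 128 plus the signal number, so 137 means signal 9 (SIGKILL), the signal the kernel OOM killer sends. A tiny Go sketch of that decoding (the helper name is illustrative):

```go
package main

import "fmt"

// signalFromExitStatus decodes the shell convention of reporting a
// signal-terminated child as 128 + signal number; it returns 0 for a
// normal exit. Status 137 decodes to signal 9 (SIGKILL), which is what
// the kernel OOM killer delivers.
func signalFromExitStatus(status int) int {
	if status > 128 {
		return status - 128
	}
	return 0
}

func main() {
	fmt.Println(signalFromExitStatus(137)) // 9, i.e. SIGKILL
}
```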
To Reproduce
Just run `./sequentially-generate-planet-mbtiles--unix-amd64-v3.1.0` with default settings on the whole planet.
Expected behavior
Generation succeeds.
Screenshots
Output (excerpt):
Desktop (please complete the following information):
Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-135-generic x86_64) with 8 GB RAM, 8 cores and 512 GB storage