gplv2 opened 11 years ago
This sounds like osm2pgrouting was killed by the kernel's OOM (out of memory) killer because the system ran out of memory and swap space. Try increasing the amount of swap space you have allocated, to something like 2x or 4x your physical memory.
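For reference, a minimal sketch of adding a swap file on Linux (the path and size here are placeholders; pick a size suited to your machine):

```sh
# Create an 8 GiB swap file (size is an example; aim for 2-4x physical RAM)
sudo fallocate -l 8G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile            # format the file as swap
sudo swapon /swapfile            # enable it immediately
swapon -s                        # verify the new swap space is active
```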
Hey, thanks. The file to import is almost 9 GB, and that's only Benelux. I've been looking at the code; by the looks of it, it tries to load the whole file into memory in one go... I can only imagine what a planet import would need. I'll try it, though.
I was low on swap (Linode VM), so I created a 9 GB swap file to cover the source size. Hope it's enough. I've been thinking: would it work to just split up the OSM source files? I guess that without the -clean parameter this would, in theory, have to work if I account for the XML structure, no?
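Not something I've verified with osm2pgrouting, but a sketch of that splitting idea using osmconvert (a separate tool; the bounding boxes below are rough examples, not exact borders):

```sh
# Cut the Benelux extract into smaller pieces by bounding box
# (-b = left,bottom,right,top in lon/lat; --complete-ways keeps ways intact at the edges)
osmconvert benelux.osm -b=3.3,50.7,7.2,53.6 --complete-ways -o=netherlands.osm
osmconvert benelux.osm -b=2.5,49.5,6.4,51.5 --complete-ways -o=belgium.osm
osmconvert benelux.osm -b=5.7,49.4,6.5,50.2 --complete-ways -o=luxembourg.osm
```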
It seems I'm actually at the next stage now; thanks for the suggestion, as it turned out. The only thing I hadn't tried was 'dmesg', which tells me the memory gets exhausted at one point.
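For anyone checking the same thing, a quick way to confirm an OOM kill from the kernel log (the exact message format varies by kernel version):

```sh
# Look for the kernel OOM killer's trail in the kernel ring buffer
dmesg | grep -i -E "out of memory|killed process"
# Typically prints something like:
#   Out of memory: Kill process 12345 (osm2pgrouting) score 890 or sacrifice child
```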
This is a limitation of osm2pgrouting that has been there from the beginning. With additional swap space you can work around it, though osm2pgrouting definitely does not handle large areas well so far.
Instead of keeping everything in memory, it would make sense to write to disk when memory runs out; other tools that process OSM data do this as well. If someone were to implement it, a lot of users would appreciate it.
Tried with plenty of swap space; unfortunately the machine became unresponsive, except for replying to ping. Too bad. It might not be such a bad idea to look at how Routino does it: a really fast way of indexing and splitting the job up into parts. Wish I could help, but I've been a bad C coder since the age of 12 :)
I have the same problem here.
Version 2.1 will also have the same problem: it will not be able to load a continent or a large country in one call, but it will allow making "incremental" additions to the database. So, for example, to load a continent, maybe do it by loading each country into the same database.
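A sketch of what that per-country workflow could look like (option names follow the 2.x command line; check osm2pgrouting --help for your version, and the file and database names here are just examples):

```sh
# First country: --clean (re)creates the routing tables
osm2pgrouting -f belgium.osm -c mapconfig.xml -d routing -U postgres -h 127.0.0.1 --clean

# Subsequent countries: omit --clean so each file is added to the same database
osm2pgrouting -f netherlands.osm -c mapconfig.xml -d routing -U postgres -h 127.0.0.1
osm2pgrouting -f luxembourg.osm -c mapconfig.xml -d routing -U postgres -h 127.0.0.1
```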
Loading by country works great.
Good to hear. It will remain as a feature request, because even if "incremental" loading works, I still need to figure out how to do "incremental" on one big file.
Is there any update on this issue? I was just trying to import North America on a machine that has 64 GB of memory and ran out. My OSM file is 168 GB in size, and it sure would be nice not to have to do this piecemeal.
No update, but try loading Canada, Mexico, and the USA separately, or maybe by state/province.
@amcirillo You have two strategies; you can read more about how to solve your problem here: https://github.com/pgRouting/osm2pgrouting/issues/245
On Ubuntu 13.04 - raring:
Importing the Benelux subset ends in "Killed", using up all memory (2 GB machine). FYI: a Nominatim import of a PBF file processed fine with the same PostgreSQL settings.
top memory right before the kill:
host=127.0.0.1 user=postgres dbname=routing port=5432
connection success
Trying to load config file /usr/local/src/mapconfig.xml
Trying to parse config
Trying to load data
Trying to parse data
Killed
How much memory would one actually need to process this? (A rough way to measure it is sketched at the end of this report.)
PostgreSQL error:
strace says:
What can I do to help locate, reproduce, and fix this? Any information you need? Things to try out? I'd love to assist.
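As a starting point, a rough way to measure the actual peak memory need, assuming GNU time is installed at /usr/bin/time (the shell builtin `time` does not report memory, and the exact osm2pgrouting options vary by version):

```sh
# GNU time's -v flag reports "Maximum resident set size" after the run
/usr/bin/time -v osm2pgrouting -file benelux.osm -conf /usr/local/src/mapconfig.xml \
    -dbname routing -user postgres -host 127.0.0.1
```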