Closed DrDanke closed 6 years ago
Hi @DrDanke,
What osm2pgrouting version are you using?
Thanks!
This is osm2pgrouting version 2.3.6 with psql (PostgreSQL) 9.6.7.
That is really strange to me; it seems as if there is a problem with /etc/hosts
and PostgreSQL can't resolve localhost. Did you check, after this failure, whether /etc/hosts
is unchanged and OK? Are you running on Linux directly, or inside Linux containers (e.g. Docker)?
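If you're not sure, one quick heuristic (my own suggestion, not something osm2pgrouting checks) is that Docker containers usually contain a /.dockerenv file at the filesystem root:

```shell
# Heuristic: /.dockerenv exists inside Docker containers
# but not on a plain host. Not authoritative, but a quick check.
if [ -f /.dockerenv ]; then
    echo "looks like Docker"
else
    echo "probably not Docker"
fi
```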
I'm running it on Debian, but inside a screen session, since the process takes so long.
my hosts configuration:
# nameserver config
# IPv4
127.0.0.1 localhost.localdomain localhost
#
# IPv6
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Hi @DrDanke,
Why do you have localhost in the third position?
The standard way in most Linux distros to set localhost in /etc/hosts is (the first field is the IP address, the second is the host name, and the rest are aliases):
127.0.0.1 localhost localhost.localdomain
or only this:
127.0.0.1 localhost
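To verify that the resolver actually picks that up (assuming a glibc-based system where getent is available), you can query the hosts database directly:

```shell
# Ask NSS how localhost resolves; this follows the "hosts:" line
# in /etc/nsswitch.conf, which normally consults /etc/hosts first.
getent hosts localhost
```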
Anyway, your problem may be something else. Is it possible that, at the point of the error, PostgreSQL lost read access to /etc/hosts (I don't know why... memory, disk space, ...)?
So, what I tried next was running as sudo, and also changing the hosts file to 127.0.0.1 localhost localhost.localdomain, but I'm still running into the same error.
What I find interesting is that the error appears after 2,000,000 ways / vertices. What is happening from the application's perspective at this point? It looks like a defined break point for something.
There is no limit on the number of ways or vertices processed by osm2pgrouting, but there is a parameter that sets the chunk size of elements processed. I'm wondering what would happen if you didn't use the default chunk size (20000) and used a bigger one (e.g. 100000) instead. You need to pass the --chunk argument with a size.
It finally worked with a bigger chunk size. I used the following command:
osm2pgrouting --f geodata/germany-latest_reduced.osm --conf osm2pgrouting/mapconfig.xml --dbname routing --username postgres --chunk 10000000 --clean
Thanks @DrDanke . Very useful experience.
Hi, a few days ago I mentioned in the chat that I was encountering a problem with the import of the whole germany-latest data (highways only). I managed to parse the data without problems using swap (300 GB). Unfortunately, I closed my SSH window too quickly to notice that several errors had appeared. I then dropped the table and re-ran the whole import process. Basically, I identified two problems:
It creates all the necessary tables and also writes data, but I would like to better understand the possible impact of the issues mentioned above. Please see the logfile attached.
output.txt