
OpenStreetMap data to PostgreSQL converter
https://osm2pgsql.org
GNU General Public License v2.0

Some research on middle performance #2110

Open joto opened 11 months ago

joto commented 11 months ago

I did some research on how fast the import into the middle tables is and how much we could possibly improve performance. For that I compared actual osm2pgsql runs with running COPY "manually".

All the numbers are based on a single run, so take them with a grain of salt.

All experiments were done with the new middle (--middle-database-format=new) and without flat node files, i.e. all nodes were imported into the database.

First I did an import with --slim -O null, i.e. without output tables, to get the current baseline. Internally this creates the tables with a primary key constraint on the id column and then imports the data using COPY. I looked only at the timings for that part, not at building the extra indexes, which happens later. Indexing will have to be done anyway and it happens completely in the database, so it is unlikely that we can do much about that part.

I then dumped out the data in COPY format and re-created the same database by creating the tables and running COPY with psql. The time for this is the shortest we can likely get; the difference between this time and the import time is the time needed to read and convert the OSM data, i.e. the necessary or unnecessary overhead generated by osm2pgsql.

I then tried some variations. Here are the timings (in minutes):

| | PK | nodes | ways | rels | sum |
|---|---|---|---|---|---|
| osm2pgsql import | yes | 263 | 77 | 3 | 343 (5.7h) |
| COPY | yes | 241 | 78 | 3 | 322 (5.4h) |
| COPY FREEZE | yes | 196 | 65 | 2 | 263 (4.4h) |
| COPY | no | 142 | 60 | 2 | 204 (3.4h) |
| COPY FREEZE | no | 127 | 50 | 2 | 179 (3.0h) |
| add primary key | — | 29 | 4 | 0 | 33 (0.6h) |
| COPY + add PK | — | 171 | 64 | 2 | 237 (4.0h) |
| COPY FREEZE + add PK | — | 156 | 54 | 2 | 212 (3.5h) |
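As a quick sanity check on the table: every row's sum column matches its parts, and the two "+ add PK" rows are exactly the component-wise sums of the corresponding no-PK COPY run and the "add primary key" step. A small Python check (row labels paraphrased):

```python
# (nodes, ways, rels, sum) in minutes, copied from the table above
rows = {
    "osm2pgsql import":     (263, 77, 3, 343),
    "COPY":                 (241, 78, 3, 322),
    "COPY FREEZE":          (196, 65, 2, 263),
    "COPY no PK":           (142, 60, 2, 204),
    "COPY FREEZE no PK":    (127, 50, 2, 179),
    "add primary key":      (29, 4, 0, 33),
    "COPY + add PK":        (171, 64, 2, 237),
    "COPY FREEZE + add PK": (156, 54, 2, 212),
}

# each row's sum column equals nodes + ways + rels
for name, (nodes, ways, rels, total) in rows.items():
    assert nodes + ways + rels == total, name

# the "+ add PK" rows are exactly no-PK COPY plus the separate PK step
pk = rows["add primary key"]
for base, combined in [("COPY no PK", "COPY + add PK"),
                       ("COPY FREEZE no PK", "COPY FREEZE + add PK")]:
    assert tuple(b + p for b, p in zip(rows[base], pk)) == rows[combined]

print(round(rows["osm2pgsql import"][3] / 60, 1))  # 5.7 hours
```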

Some results from this research:

* From the numbers here it seems to be significantly faster to create the table without the primary key constraint and add that later. But in another test where I also generated output tables, this did not make a difference. The reason is probably that osm2pgsql was busy so often doing other things that PostgreSQL had the time to update the indexes while the import ran. So it might be possible to get some improvement here in a real situation, but it is not quite as clear-cut as the numbers here suggest.
* It looks like using COPY FREEZE can improve the performance. For this to work we have to create the table in the same transaction as we do the COPY. This is not easily possible with the current code, but it is a change we could do.

We also have to keep in mind that the situation is different if we use a flat nodes file. (And also different if we use the --extra-attributes option.)

And in real situations there is interaction between the middle and the output which I haven't looked at in detail so far. Most nodes don't have any tags, so they don't take up any time in the output code; the middle code is the bottleneck here. For ways the situation is reversed: the middle is reasonably simple, while the output runs some Lua code for basically every way, which is almost certainly the bottleneck.

pnorman commented 11 months ago
> From the numbers here it seems to be significantly faster to create the table without the primary key constraint and add that later. But in another test where I also generated output tables, this did not make a difference. The reason is probably that osm2pgsql was busy so often doing other things that PostgreSQL had the time to update the indexes while the import ran. So it might be possible to get some improvement here in a real situation, but it is not quite as clear-cut as the numbers here suggest.

We should move to creating the UNIQUE index after loading the data. It might not have sped up your test, but I believe it would on some hardware with a different number of threads. Additionally, the resulting index is properly balanced without dead tuples in it.

> It looks like using COPY FREEZE can improve the performance. For this to work we have to create the table in the same transaction as we do the COPY. This is not easily possible with the current code, but it is a change we could do.

This would rule out ever having multiple threads writing to the middle at the same time. Do we want to do that?
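Taken together, the two quoted points amount to an import order like the sketch below: create the table and run COPY FREEZE inside one transaction, then build the primary key afterwards. This is only an illustration of the SQL involved, not osm2pgsql's actual code; the table name, columns, and file path are made up:

```python
# Sketch of the suggested import order (illustrative names, not the real
# middle schema): the table must be created (or truncated) in the same
# transaction as the COPY for FREEZE to be allowed, and the primary key
# is added only after the data is loaded.
def freeze_import_script(table: str = "osm2pgsql_nodes") -> str:
    return "\n".join([
        "BEGIN;",
        # created in the same transaction, so COPY FREEZE is permitted
        f"CREATE TABLE {table} (id int8 NOT NULL, lat int4 NOT NULL, lon int4 NOT NULL);",
        f"COPY {table} FROM '/data/nodes.copy' (FREEZE);",
        "COMMIT;",
        # index built once, after loading: balanced and without dead tuples
        f"ALTER TABLE {table} ADD PRIMARY KEY (id);",
    ])

print(freeze_import_script())
```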

rouen-sk commented 11 months ago

I believe using Postgres as the node cache was a bad choice in the first place, and minor optimizations won't "save us" here. Postgres has way too much overhead for this purpose, in terms of time, storage space, and IOPS. A simple high-performance key-value store library, such as LevelDB, would outperform it significantly.

For example, Imposm uses LevelDB for its node cache, and on my machine (a cloud VM, so poor IOPS) building the node cache takes about 1 hour and 160 GB of disk space. osm2pgsql slim mode (with a 50 GB RAM cache) takes several hours, and I honestly don't know how much disk space, because I tried the latest planet yesterday and it crashed on low disk space, with about 550 GB free at the start.
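For illustration, the key-value approach boils down to mapping fixed-size node-id keys to fixed-size packed locations. A toy Python sketch, with an in-memory dict standing in for a LevelDB-style on-disk store, using OSM's usual 1e-7-degree fixed-point encoding:

```python
import struct

# Toy sketch of a key-value node cache: 8-byte node-id keys map to 8-byte
# packed (lat, lon) values. A real implementation would back this with an
# on-disk store such as LevelDB instead of a dict.
def encode(lat: float, lon: float) -> bytes:
    # two signed 32-bit ints at 1e-7 degree resolution
    return struct.pack("<ii", round(lat * 10**7), round(lon * 10**7))

def decode(blob: bytes) -> tuple[float, float]:
    lat, lon = struct.unpack("<ii", blob)
    return lat / 10**7, lon / 10**7

cache: dict[bytes, bytes] = {}          # stand-in for the on-disk store
key = struct.pack(">q", 240109189)      # big-endian keys sort by node id
cache[key] = encode(51.5074, -0.1278)
print(decode(cache[key]))  # (51.5074, -0.1278)
```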

joto commented 11 months ago

@rouen-sk I agree, and we are taking steps toward something like that. But a) it is a far bigger project than anything suggested here, and b) it does not work for all use cases, because some of them need that data in the database. So improving what we have is still worthwhile. Let's stick to that discussion here.

See also https://osm2pgsql.org/contribute/project-ideas.html#non-postgresql-middle

mboeringa commented 11 months ago

> I believe using Postgres as the node cache was a bad choice in the first place, and minor optimizations won't "save us" here. Postgres has way too much overhead for this purpose, in terms of time, storage space, and IOPS. A simple high-performance key-value store library, such as LevelDB, would outperform it significantly.
>
> For example, Imposm uses LevelDB for its node cache, and on my machine (a cloud VM, so poor IOPS) building the node cache takes about 1 hour and 160 GB of disk space. osm2pgsql slim mode (with a 50 GB RAM cache) takes several hours, and I honestly don't know how much disk space, because I tried the latest planet yesterday and it crashed on low disk space, with about 550 GB free at the start.

@rouen-sk

Are you aware of the --flat-nodes option, which osm2pgsql has supported for years to store nodes outside the database in an efficient and fast way?

On my machine (non-cloud), it takes only 34m to import all 12B+ nodes of Facebook's Daylight distribution of OpenStreetMap using the --flat-nodes option, and the resulting file should be some 160 GB as well. That still leaves the ways and relations stored in PostgreSQL for the middle, but I am not sure whether Imposm doesn't store those in PostgreSQL as well?
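The idea behind --flat-nodes is even simpler than a key-value store: a file of fixed-width records indexed directly by node id, so a lookup is one seek plus one read, and the file size is roughly 8 bytes times the highest node id. A toy Python sketch of that idea (not osm2pgsql's actual file format):

```python
import struct
import tempfile

# Toy sketch of a flat node file: fixed 8-byte records addressed by
# node id, storing (lat, lon) as 1e-7-degree fixed-point integers.
RECORD = struct.Struct("<ii")

def write_node(f, node_id: int, lat: float, lon: float) -> None:
    f.seek(node_id * RECORD.size)
    f.write(RECORD.pack(round(lat * 10**7), round(lon * 10**7)))

def read_node(f, node_id: int) -> tuple[float, float]:
    f.seek(node_id * RECORD.size)
    lat, lon = RECORD.unpack(f.read(RECORD.size))
    return lat / 10**7, lon / 10**7

with tempfile.TemporaryFile() as f:
    write_node(f, 1234, 48.8566, 2.3522)
    print(read_node(f, 1234))  # (48.8566, 2.3522)
```

Note the trade-off: this is very fast and compact for densely numbered ids (as in the real planet file), but the file is sized by the highest node id regardless of how many nodes you actually import.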

550 GB of free space for a planet import is becoming very tight. The planet is just too big nowadays, and honestly, with good 4 TB NVMe PCIe 3.x-5.x drives now as low as maybe $200-300, shortage of disk space or IOPS should be no excuse for a failed import.