Closed michaelmattig closed 7 years ago
Can you give more information?
- the `-read` or `-write` part?
- example-mapping.json?
- `-cachedir`?

Here is my minimal example:
./imposm3 import -config config.json -read /home/michael/Downloads/europe-latest.osm.pbf
Monaco and Germany both worked, but the read time was below 5 minutes for those.
config.json:
{
  "cachedir": "/data02/imposm",
  "mapping": "mapping.json",
  "connection": "postgis://osm:osm@localhost:5432/osm2",
  "srid": 4326
}
I am using the shipped mapping.json.
This is the size of the cache dir. I tried different locations on different drives.
16K /data02/imposm/relations
13G /data02/imposm/coords
6.3G /data02/imposm/ways
16K /data02/imposm/inserted_ways
2.0G /data02/imposm/nodes
21G /data02/imposm/
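For reference, the listing above looks like plain `du -h` output; assuming the cachedir path from config.json, it can be reproduced with:

```shell
# Per-directory sizes of the imposm cache (path taken from config.json above)
du -h /data02/imposm
```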
I get the attached error using imposm3-0.3.0dev-20170119-353bc5d-linux-x86-64 (attached: imposm_error_0.3.txt).
It also failed using imposm3-0.2.0dev-20161012-ee59cf2-linux-x86-64 (attached: imposm_error_0.2.txt).
Using imposm3-0.1dev-20160128-bb3d003-linux-x86-64.tar.gz succeeded:
[Jan 30 18:09:31] [INFO] Reading OSM data took: 10m15.316631991s
[Jan 30 18:09:31] [INFO] Imposm took: 10m15.316821305s
If I try to import the whole planet, I also get an error using 0.1dev:
fatal error: unexpected signal during runtime execution
[signal 0x7 code=0x2 addr=0x7f66ba31b2b3 pc=0x7f67b361af6b]
runtime stack:
runtime.throw(0xa63800, 0x2a)
/usr/local/go/src/runtime/panic.go:527 +0x90
runtime.sigpanic()
/usr/local/go/src/runtime/sigpanic_unix.go:12 +0x5a
goroutine 11 [syscall, locked to thread]:
runtime.cgocall(0x407a56, 0xc82006ec50, 0x0)
/usr/local/go/src/runtime/cgocall.go:120 +0x11b fp=0xc82006ec20 sp=0xc82006ebf0
github.com/jmhodges/levigo._Cfunc_leveldb_write(0x2811ed0, 0x280eab0, 0x7f67240008c0, 0xc8d473a000)
github.com/jmhodges/levigo/_obj/_cgo_gotypes.go:807 +0x35 fp=0xc82006ec50 sp=0xc82006ec20
github.com/jmhodges/levigo.(*DB).Write(0xc82005c688, 0xc82005c690, 0xc82005c0c0, 0x0, 0x0)
/home/os/imposm/src/github.com/omniscale/imposm3/Godeps/_workspace/src/github.com/jmhodges/levigo/db.go:217 +0x73 fp=0xc82006ecb8 sp=0xc82006ec50
github.com/omniscale/imposm3/cache.(*WaysCache).PutWays(0xc8201c43c0, 0xc91d20c000, 0x1f40, 0x1f40, 0x0, 0x0)
/home/os/imposm/src/github.com/omniscale/imposm3/cache/ways.go:50 +0x1c4 fp=0xc82006ee50 sp=0xc82006ecb8
github.com/omniscale/imposm3/reader.ReadPbf.func3(0xc820017000, 0xc8201b70e0, 0xc8201b9290, 0xc8201b9250, 0xc8201bcf40, 0xc8201b8f30, 0xc8201b92a0)
/home/os/imposm/src/github.com/omniscale/imposm3/reader/reader.go:130 +0x41f fp=0xc82006ef78 sp=0xc82006ee50
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1696 +0x1 fp=0xc82006ef80 sp=0xc82006ef78
created by github.com/omniscale/imposm3/reader.ReadPbf
/home/os/imposm/src/github.com/omniscale/imposm3/reader/reader.go:138 +0x6cc
goroutine 1 [semacquire, 7 minutes]:
sync.runtime_Semacquire(0xc820012f3c)
/usr/local/go/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc820012f30)
/usr/local/go/src/sync/waitgroup.go:126 +0xb4
github.com/omniscale/imposm3/parser/pbf.(*parser).Parse(0xc820012f00)
/home/os/imposm/src/github.com/omniscale/imposm3/parser/pbf/process.go:51 +0xc9
github.com/omniscale/imposm3/reader.ReadPbf(0xc8201bcf40, 0xc8201b8f30, 0xc820017000, 0xc8201c44b0, 0x0)
/home/os/imposm/src/github.com/omniscale/imposm3/reader/reader.go:241 +0x971
github.com/omniscale/imposm3/import_.Import()
/home/os/imposm/src/github.com/omniscale/imposm3/import_/import.go:120 +0xa87
github.com/omniscale/imposm3/cmd.Main(0xaab9b8)
/home/os/imposm/src/github.com/omniscale/imposm3/cmd/main.go:46 +0x2d2
main.main()
/home/os/imposm/src/github.com/omniscale/imposm3/imposm3.go:8 +0x23
goroutine 17 [syscall, 7 minutes, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:1696 +0x1
goroutine 5 [select]:
github.com/omniscale/imposm3/logging.(*LogBroker).loop(0xe600a0)
/home/os/imposm/src/github.com/omniscale/imposm3/logging/logger.go:143 +0x86e
created by github.com/omniscale/imposm3/logging.init.1
/home/os/imposm/src/github.com/omniscale/imposm3/logging/logger.go:240 +0x17a
and some other errors:
*** Error in `./imposm3': double free or corruption (!prev): 0x00007f6758200a80 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f67b35457e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x7fe0a)[0x7f67b354de0a]
/lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7f67b355198c]
/home/michael/osm/imposm3-0.1dev-20160128-bb3d003-linux-x86-64/lib/libhyperleveldb.so.0(+0x4b496)[0x7f67b3aff496]
/home/michael/osm/imposm3-0.1dev-20160128-bb3d003-linux-x86-64/lib/libhyperleveldb.so.0(_ZN7leveldb3log6Writer20EmitPhysicalRecordAtENS0_10RecordTypeEPKcmm+0xac)[0x7f67b3adffcc]
/home/michael/osm/imposm3-0.1dev-20160128-bb3d003-linux-x86-64/lib/libhyperleveldb.so.0(_ZN7leveldb3log6Writer9AddRecordERKNS_5SliceE+0xf5)[0x7f67b3ae01c5]
/home/michael/osm/imposm3-0.1dev-20160128-bb3d003-linux-x86-64/lib/libhyperleveldb.so.0(_ZN7leveldb6DBImpl5WriteERKNS_12WriteOptionsEPNS_10WriteBatchE+0x45f)[0x7f67b3ad291f]
/home/michael/osm/imposm3-0.1dev-20160128-bb3d003-linux-x86-64/lib/libhyperleveldb.so.0(leveldb_write+0x1d)[0x7f67b3acd7dd]
./imposm3(_cgo_c510a9a78dac_Cfunc_leveldb_write+0x3b)[0x407a91]
./imposm3[0x464d2a]
> This is the size of the cache dir. I tried different locations on different drives.

Thanks, so probably not a cachedir/disk-space problem. See https://imposm.org/docs/imposm3/latest/tutorial.html#cache-files: "Make sure that you have enough disk space for storing these cache files. The underlying LevelDB library will crash if it runs out of free space. 2-3 times the size of the PBF file is a good estimate for the cache size, even with -diff mode."
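The quoted "2-3 times the PBF size" estimate can be checked up front. A rough sketch, assuming the PBF and cachedir paths mentioned in this thread (override the variables for your setup):

```shell
# Rough disk-space sanity check based on the docs' "2-3x the PBF size" estimate.
# PBF and CACHEDIR default to the paths from this thread; they are assumptions.
PBF=${PBF:-/home/michael/Downloads/europe-latest.osm.pbf}
CACHEDIR=${CACHEDIR:-/data02/imposm}

pbf_kb=$(du -k "$PBF" | awk '{print $1}')
free_kb=$(df -Pk "$CACHEDIR" | awk 'NR==2 {print $4}')
need_kb=$((pbf_kb * 3))   # upper end of the 2-3x estimate

echo "free: ${free_kb} KiB, estimated cache need: ${need_kb} KiB"
if [ "$free_kb" -lt "$need_kb" ]; then
    echo "WARNING: less than 3x the PBF size free; LevelDB may crash"
fi
```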
> I am using the shipped mapping.json.

So mapping.json = https://github.com/omniscale/imposm3/blob/master/example-mapping.json
and no extras, like `load_all: true`?
> "Make sure that you have enough disk space for storing these cache files. The underlying LevelDB library will crash if it runs out of free space. 2-3 times the size of the PBF file is a good estimate for the cache size, even with -diff mode."

That's not the problem; my disk has 11 TB of free space.
> so mapping.json = https://github.com/omniscale/imposm3/blob/master/example-mapping.json

Yes.

> and no extra, like load_all: true

Yes.
Btw: my machine has 40 cores; could that be a problem? Can I limit the number of threads for imposm?
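On the thread-count question, one hedged workaround (not confirmed anywhere in this thread): imposm3 is a Go program, and the standard `GOMAXPROCS` environment variable caps the number of OS threads the Go runtime executes simultaneously, so something like this might help on a 40-core machine:

```shell
# Hypothetical workaround: limit the Go runtime to 8 concurrently executing
# threads via the standard GOMAXPROCS environment variable. The command and
# paths mirror the minimal example earlier in this thread.
GOMAXPROCS=8 ./imposm3 import -config config.json \
    -read /home/michael/Downloads/europe-latest.osm.pbf
```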
This looks similar to #49. You should be able to work around this issue by compiling with LevelDB, instead of HyperLevelDB.
I'll gladly try, but how do I do this?
Btw: the README says to

go get github.com/omniscale/imposm3
go install github.com/omniscale/imposm3

but the second command does nothing and there is no bin folder in my Go workspace. Checking out the repository and running `make imposm3` works, though. But how do I configure the build with LevelDB? I only have libleveldb-dev installed on my system, so I guess it's already using that?
Sorry, the command moved to ./cmd/imposm3. So it's `go install github.com/omniscale/imposm3/cmd/imposm3`. I've just updated the README.
And yes, it should use the system installation of LevelDB.
(Only the packaging.sh script replaces LevelDB with HyperLevelDB)
Thank you for your help. It worked now with the manually compiled binary (using Go 1.7)
[Feb 2 14:39:13] [INFO] Reading OSM data took: 32m20.231478752s
I am trying to import the planet-latest.osm.pbf, but I get a SIGABRT error after about 5 minutes. I tried different binary releases and compiled imposm3 myself using Go 1.6 and 1.7, but it doesn't work.
My machine is running Ubuntu 16 LTS and does not run out of memory.
I attached the error output to this post: imposm_error.txt