mediagis / nominatim-docker

100% working container for Nominatim

WARNING: terminating connection because of crash of another server process #449

Open MThurm opened 1 year ago

MThurm commented 1 year ago

Hi,

I used this command:

docker run -it \
  -e PBF_URL=https://download.geofabrik.de/europe-latest.osm.pbf \
  -e REPLICATION_URL=https://download.geofabrik.de/europe-updates/ \
  -e NOMINATIM_FLATNODE_FILE=/nominatim/flatnode.file \
  -e THREADS=8 \
  -e POSTGRES_SHARED_BUFFERS=4GB \
  -e POSTGRES_MAINTAINENCE_WORK_MEM=4GB \
  -e POSTGRES_AUTOVACUUM_WORK_MEM=4GB \
  -e POSTGRES_WORK_MEM=1GB \
  -e POSTGRES_EFFECTIVE_CACHE_SIZE=1GB \
  -e POSTGRES_SYNCHRONOUS_COMMIT=off \
  -e POSTGRES_MAX_WAL_SIZE=1GB \
  -e POSTGRES_CHECKPOINT_TIMEOUT=20min \
  -e POSTGRES_CHECKPOINT_COMPLETITION_TARGET=0.9 \
  -p 8080:8080 --name nominatim mediagis/nominatim:4.2

and got the following error (it's the second time, after days):

2023-05-16 00:47:34  ERROR: SQL command failed: EXECUTE get_way_list({80726707,112317769,104007912,80726709,104007900,31567708,173491720,157687661,29226236,1121140575,931776901,1121140576,22144001,931776905,931776906,1152285707,39675059,667550650,31302785})
2023-05-16 00:47:39  ERROR: DB copy thread failed: Ending COPY mode for 'place' failed: ERROR:  could not extend file "base/16386/19948": wrote only 4096 of 8192 bytes at block 53524
HINT:  Check free disk space.
CONTEXT:  COPY place, line 151887372
.
Traceback (most recent call last):
  File "/usr/local/bin/nominatim", line 14, in <module>
    exit(cli.nominatim(module_dir='/usr/local/lib/nominatim/module',
  File "/usr/local/lib/nominatim/lib-python/nominatim/cli.py", line 264, in nominatim
    return parser.run(**kwargs)
  File "/usr/local/lib/nominatim/lib-python/nominatim/cli.py", line 126, in run
    return args.command.run(args)
  File "/usr/local/lib/nominatim/lib-python/nominatim/clicmd/setup.py", line 94, in run
    database_import.import_osm_data(files,
  File "/usr/local/lib/nominatim/lib-python/nominatim/tools/database_import.py", line 113, in import_osm_data
    run_osm2pgsql(options)
  File "/usr/local/lib/nominatim/lib-python/nominatim/tools/exec_utils.py", line 159, in run_osm2pgsql
    subprocess.run(cmd, cwd=options.get('cwd', '.'),
  File "/usr/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/lib/nominatim/osm2pgsql', '--hstore', '--latlon', '--slim', '--log-progress', 'true', '--number-processes', '1', '--cache', '0', '--style', '/usr/local/etc/nominatim/import-full.style', '--output', 'gazetteer', '--create', '--flat-nodes', '/nominatim/flatnode.file', '--with-forward-dependencies', 'false', '/nominatim/data.osm.pbf']' returned non-zero exit status 2

It's a machine with a 2 TB HDD, an i3-8100 and 24 GB RAM. Please help :/

Best regards

PS: Sorry for my bad English

mtmail commented 1 year ago

HINT: Check free disk space.

Have you checked the free disk space?
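A quick way to check is to compare the host and the container (a minimal sketch; nominatim is the --name used in the command above):

df -h                          # free space on the host
docker exec nominatim df -h    # free space as seen inside the container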

MThurm commented 1 year ago

[screenshots showing free disk space on the ext4.vhdx virtual volume]

Sure.

mtmail commented 1 year ago

I see. ext4.vhdx is an auto-expanding virtual volume and there's plenty of disk space available.

Can you check the PostgreSQL server logfile for errors?

This part of the Nominatim import is not recoverable; you'd have to start the import from the beginning. The causes I've usually seen for PostgreSQL crashing are lack of disk space, network issues, virtual machine instability (if network-mounted, for example) and hardware problems (faulty RAM). All of these are outside the control of the Nominatim software; rerunning the process is the only thing that might work.
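In practice that means discarding the failed container and starting over, roughly like this (a sketch; the container name matches the --name flag used above):

docker rm -f nominatim   # removes the container together with the partial import
# then rerun the original docker run command from the top of this issue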

mtmail commented 1 year ago

and got the following error (it's the second time, after days):

The osm2pgsql step should take a couple of hours. A slow hard drive (HDD instead of SSD/NVMe) makes it significantly slower. If this step already took days for you, then expect the full import to take weeks.

MThurm commented 1 year ago

If this step already took days for you then expect the full import to take weeks.

That would not be a problem. I will try to install a 2 TB NVMe and run it again in the next few days.

Thanks for the help

mtmail commented 1 year ago

500 GB should be enough for Europe. The whole planet needs 800-900 GB.

MThurm commented 1 year ago

Hi,

now I have a 1 TB M.2 NVMe and there is a new error:

ERROR: DB copy thread failed: Ending COPY mode for 'planet_osm_ways' failed: ERROR:  could not extend file "base/16386/19931.87": No space left on device
HINT:  Check free disk space.
CONTEXT:  COPY planet_osm_ways, line 383892342

I added --shm-size=2g to the command; same error.

leonardehrenfried commented 1 year ago

"No space left on device" is pretty clear, isn't it? I would check how much space there is left on your partitions.

MThurm commented 1 year ago

Hi, I found a solution. The problem was Docker under Windows.

Disable "use the WSL 2 based engine" under settings -> General and increase the virtual Disk Limit to maximum.

Best regards

MThurm commented 1 year ago

[screenshot]

Hi, ignore my last post... it's not working. Now I've tried it in Docker on Ubuntu. Same error...

MThurm commented 1 year ago

After trying some settings, the import ran for 3 days and ended with:

2023-05-29 12:25:27: Done 1073294/1073294 in 1113 @ 964.215 per second - FINISHED postcodes (location_postcode)

2023-05-29 12:26:25: Post-process tables /app/init.sh: line 81: 104 Killed sudo -E -u nominatim nominatim import --osm-file $OSMFILE --threads $THREADS

What happened now? Please help me :(

Best regards

mtmail commented 1 year ago

"Post-process tables" runs a lot of SQL queries. Check the postgresql logs for hints.

MThurm commented 1 year ago

The Docker container has now stopped twice. When I restart it, I can see that the size is still rising. Is there a chance it will still work?

sqllog.txt dockerlog.txt

mo-aro-etailer commented 1 year ago

Hello, how did you solve this, please? I have the same problem.

MThurm commented 1 year ago

Not at all. I now have it installed on Ubuntu without Docker.

thijndehaas commented 4 months ago

@MThurm So it does work on Ubuntu without Docker? I have the same problem with Docker (on Debian). My 2 TB of free disk space doesn't seem to be enough. I already created a script that logs the remaining disk space every 30 seconds so I can look back at it. It seems like the container hits a limit at around 975 GB of usage.
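For reference, a script like the one described might look like this (a minimal sketch; the monitored path and log filename are made up):

#!/bin/bash
# log remaining disk space every 30 seconds so usage can be reviewed later
while true; do
    echo "$(date '+%F %T') $(df -h --output=avail /var/lib/docker | tail -n 1)" >> disk-usage.log
    sleep 30
done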