ugns / sks-docker

OpenPGP SKS Key Server Docker container build

Failure while binding socket... #1

Open · larsw opened this issue 6 years ago

larsw commented 6 years ago

Any idea what this is caused by? I've run sks-init on the volume bound to /var/lib/sks ...
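
For context, I'm starting things roughly like this (volume name, image tag, and the exact sks-init invocation are illustrative, not verbatim):

```sh
# one-time database init on the data volume (illustrative commands)
docker volume create sks-data
docker run --rm -v sks-data:/var/lib/sks jtbouse/sks sks-init

# then the server itself
docker run -d -p 11371:11371 -v sks-data:/var/lib/sks jtbouse/sks
```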

```
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-sks-data-dir: applying...
[fix-attrs.d] 01-sks-data-dir: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
db_archive: BDB1566 DB_ENV->log_archive interface requires an environment configured for the logging subsystem
db_archive: DB_ENV->log_archive: Invalid argument
Fatal error: exception Failure("Failure while binding socket.  Probably another socket bound to this address")
db_archive: /var/lib/sks/PTree: No such file or directory
db_archive: BDB0061 PANIC: No such file or directory
db_archive: BDB1544 process-private: unable to find environment
db_archive: /var/lib/sks/PTree: No such file or directory
db_archive: DB_ENV->open: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
2018-06-17 12:19:47 sks_db, SKS version 1.1.6
2018-06-17 12:19:47 Using BerkelyDB version 5.3.28
2018-06-17 12:19:47 Copyright Yaron Minsky 2002, 2003, 2004
2018-06-17 12:19:47 Licensed under GPL. See LICENSE file for details
2018-06-17 12:19:47 http port: 11371
Fatal error: exception Failure("Failure while binding socket.  Probably another socket bound to this address")
Fatal error: exception Failure("Failure while binding socket.  Probably another socket bound to this address")
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.
```
jbouse commented 3 years ago

Sorry for the late response, but I just noticed this issue. I'm currently working on a revision to the image that should remove the need to run the sks-init script: the new ENTRYPOINT will automatically attempt to import the key dump contents if the key database (/var/lib/sks/KDB) doesn't exist and a key dump is available under /data/dump. I decided to read the dump from outside the /var/lib/sks base directory to keep SKS from opening all of the dump files when it launches after the import. I'm still doing a lot of testing and haven't pushed the changes to the repository yet; when I do, the jtbouse/sks image on Docker Hub will also be updated.
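
Roughly, the new ENTRYPOINT check will look something like this; treat it as a sketch, since the details may change before I push (the -n/-cache values are the ones suggested in the SKS README):

```sh
#!/bin/sh
# entrypoint sketch - assumed logic, not the final script

KDB=/var/lib/sks/KDB
DUMP=/data/dump

# Import the key dump only when the key database is missing
# and dump files are actually present under /data/dump.
if [ ! -d "$KDB" ] && [ -n "$(ls -A "$DUMP" 2>/dev/null)" ]; then
    cd /var/lib/sks || exit 1
    # "sks build" creates the key database in the current directory;
    # "sks pbuild" then builds the prefix tree from it.
    sks build "$DUMP"/*.pgp -n 10 -cache 100
    sks pbuild -cache 20 -ptree_cache 70
fi

# Hand off to the s6 supervision tree.
exec /init
```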

As for the errors your log is showing, I'm not precisely sure of the cause, but I had been having some database corruption issues with my own SKS keyserver running this image, so I can't rule out that it's related. The sks-log-clean service does attempt to run db_archive against the key and ptree databases on startup and, in the image as currently uploaded, every 7 days; I'm changing that to every 2 hours to keep the BDB logs from getting out of control, as I've seen them do at times.
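
For reference, the revised sks-log-clean service will be doing something along these lines (the exact flags and interval are still being tested, so this is only a sketch):

```sh
#!/bin/sh
# sks-log-clean run script sketch - interval and flags are assumptions

while true; do
    # db_archive -d removes BDB log files no longer needed for recovery;
    # -h points it at the database environment's home directory.
    db_archive -d -h /var/lib/sks/KDB
    db_archive -d -h /var/lib/sks/PTree
    sleep 7200  # every 2 hours
done
```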

Another side effect of the way I had the image startup sequenced is that the s6 supervision was attempting to perform its tasks before forking off the shell to execute sks-init, so it's possible that's where the conflict is coming from. The new ENTRYPOINT should remediate this as well, since passing any options to the command will, in effect, bypass the s6 supervision entirely; see the sketch below.
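
The bypass is just the usual Docker entrypoint pattern, something like:

```sh
#!/bin/sh
# If any command/options were passed to `docker run`, exec them
# directly and never start the s6 supervision tree.
if [ $# -gt 0 ]; then
    exec "$@"
fi

# No arguments: boot s6 and its supervised services as usual.
exec /init
```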