Open HuJK opened 6 years ago
Exactly the same issue here. The message tells it all. Looks like a vicious circle...
On my side, I am trying to run nginx as a service. There is an official nginx image on Docker Hub, and nginx is started this way:
CMD ["nginx", "-g", "daemon off;"]
Does it mean that Alpine completely relies on some kind of orchestrator to react to a service going down?
About your specific case with ssh: I guess your goal is to connect to your container to look at what's going on while testing your Dockerfile instructions...
If this is the case, you can open a shell in your container from the same host (your workstation?) with this command: docker exec -it <container_id> sh
I have the same problem. Can anyone fix it? It has puzzled me for days!
@heavenkiller2018 What are you trying to do? Maybe, like me, you wanted to start several services in the same container: for example, an application server and a web server on top of it.
But I have learnt this is not the right way to do it. Actually, I realized it while writing my last post in this thread: indeed, Alpine relies on some kind of orchestrator to manage the resilience of your services.
The right design is to dedicate one container to each service. This is why Alpine does not provide usable classical 'services'.
Using containers implies a slightly different architecture of the deployment of your application.
While testing on your desktop, you can use docker-compose to build and run your containers together. But ultimately you want to deploy your application to a production environment, and this is where container application platforms like Amazon ECS/EKS or Red Hat OpenShift come in. Once set up, the execution environment on those platforms takes care of the health of your app (killing unhealthy containers and starting new ones) and scales up or down (instantiating new hosts to run new containers in response to increased load). It is no longer a classical daemon, managed by a classical Linux service, that supervises your app or web server, but the container platform. Therefore, dedicating one container per service is the more appropriate design.
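As an illustration of that one-container-per-service design, here is a minimal docker-compose.yml sketch (the service and image names, ports, and the `my-app:latest` image are illustrative, not from this thread):

```yaml
# One container per service: compose (or the production orchestrator)
# supervises each container and restarts it on failure, instead of a
# classical init system inside the container.
services:
  web:
    image: nginx:alpine          # web server in its own container
    ports:
      - "8080:80"
    depends_on:
      - app
    restart: unless-stopped      # resilience handled by the orchestrator
  app:
    image: my-app:latest         # hypothetical application-server image
    restart: unless-stopped
```

With this layout, `docker compose up -d` starts both containers, and the restart policy (not an in-container daemon) brings a crashed service back up.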
I have the same problem too 🤷‍♂️
any solution?
Add this to your Dockerfile:
VOLUME [ "/sys/fs/cgroup" ]
or add a volume like this when running the image:
docker run -v /sys/fs/cgroup your_image
@Veitor Great, thank you
To enable secondary services beside the main service, I used an entrypoint.sh as documented at:
Docker Entrypoint Script
In the Dockerfile I needed to do the following setup (this is how to launch the "rsyslog" service for a web service with Apache):
RUN apk add util-linux openrc
# As suggested above (Dockerfile comments must start at the beginning of a line)
VOLUME /sys/fs/cgroup
RUN rc-update add rsyslog default \
&& mkdir /run/openrc \
&& touch /run/openrc/softlevel # Workaround for the Error Message
COPY config/docker/entrypoint.sh /usr/local/bin/
RUN chmod u+x,g+x /usr/local/bin/entrypoint.sh \
&& ln -s /usr/local/bin/entrypoint.sh / # backwards compat
ADD html/ /var/www/html/
WORKDIR /var/www/html/
ENTRYPOINT ["entrypoint.sh"]
CMD ["httpd", "-DNO_DETACH", "-DFOREGROUND", "-e", "info"]
In the entrypoint.sh
script, I first need to call the rc-status
command to be able to start the "rsyslog" service later:
#!/bin/sh
set -e
echo "Service 'All': Status"
rc-status -a
echo "Service 'RSyslog': Starting ..."
rc-service rsyslog start
if [ "$1" = 'httpd' ]; then
    echo "Command: '$*'"
    echo "Service '$1': Launching ..."
fi
exec "$@"
At container launch time it produces the following output:
php_web | Service 'All': Status
php_web | Service `hwdrivers' needs non existent service `dev'
php_web | * Caching service dependencies ... [ ok ]
php_web | Runlevel: boot
php_web | Runlevel: default
php_web | rsyslog [ stopped ]
php_web | Runlevel: nonetwork
php_web | Runlevel: shutdown
php_web | Runlevel: sysinit
php_web | Dynamic Runlevel: hotplugged
php_web | Dynamic Runlevel: needed/wanted
php_web | Dynamic Runlevel: manual
php_web | Service 'RSyslog': Starting ...
php_web | * Starting rsyslog ... [ ok ]
php_web | Command: 'httpd -DNO_DETACH -DFOREGROUND -e info'
php_web | Service 'httpd': Launching ...
However, if the container was previously launched with docker-compose up
or docker-compose restart
in foreground mode and then stopped with ^C,
the service always enters the crashed
state and cannot be started.
I need to run the sequence docker-compose down; docker-compose up -d
to get the secondary service working,
as seen here:
php_web | Service 'All': Status
php_web | Runlevel: boot
php_web | Runlevel: default
php_web | rsyslog [ crashed ]
php_web | Runlevel: nonetwork
php_web | Runlevel: shutdown
php_web | Runlevel: sysinit
php_web | Dynamic Runlevel: hotplugged
php_web | Dynamic Runlevel: needed/wanted
php_web | Dynamic Runlevel: manual
php_web | rsyslog [ crashed ]
php_web | Service 'RSyslog': Starting ...
php_web | * WARNING: rsyslog has already been started
php_web | Command: 'httpd -DNO_DETACH -DFOREGROUND -e info'
php_web | Service 'httpd': Launching ...
The main service runs fine, but the "rsyslog" service stays dead.
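One possible way to avoid the crashed state (a sketch, not from this thread): instead of only exec'ing the main command, run it in the background and trap TERM/INT, so the secondary service can be stopped cleanly before the container exits. The pattern is shown here generically: sleep stands in for httpd, an echo to the hypothetical file /tmp/shutdown-demo.txt stands in for rc-service rsyslog stop.

```shell
#!/bin/sh
# Generic signal-forwarding sketch for entrypoint.sh. Running the main
# command in the background keeps the shell alive to handle TERM/INT, so a
# cleanup command (here simulated by the echo) can run before exit; OpenRC
# would then not record the secondary service as "crashed" after ^C.
sleep 5 &                     # stand-in for: httpd -DNO_DETACH -DFOREGROUND ... &
child=$!

# on TERM/INT: stop the secondary service (simulated), then stop the main one
trap 'echo "stopping secondary service" > /tmp/shutdown-demo.txt; kill -TERM "$child" 2>/dev/null' TERM INT

# simulate "docker stop" / ^C by signalling ourselves after a moment
( sleep 1; kill -TERM $$ ) &

wait "$child" || true         # returns once the main command is gone
```

In a real entrypoint the trap body would call rc-service rsyslog stop, and wait would supervise the actual httpd process.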
I wondered what the required /run/openrc/softlevel
file might contain, so I added this to entrypoint.sh:
echo "Service 'All': Status"
rc-status -a
echo "Softlevel File:"
echo "'$(cat /run/openrc/softlevel)'"
At launch time it shows:
Service 'All': Status
* Caching service dependencies ...
Service `hwdrivers' needs non existent service `dev' [ ok ]
Runlevel: boot
Runlevel: default
rsyslog [ stopped ]
Runlevel: nonetwork
Runlevel: shutdown
Runlevel: sysinit
Dynamic Runlevel: hotplugged
Dynamic Runlevel: needed/wanted
Dynamic Runlevel: manual
Softlevel File:
''
The required file actually contains nothing.
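Since the stale state survives an unclean stop, another possible workaround (a sketch; it assumes OpenRC keeps per-service runtime state under /run/openrc, e.g. the started/ directory, which is where the "crashed" verdict comes from) is to wipe that state at the top of entrypoint.sh, before rc-status. Demonstrated here against the hypothetical directory /tmp/openrc-demo instead of the real /run/openrc:

```shell
#!/bin/sh
# Sketch: clear stale OpenRC runtime state at container start, so a service
# left over from an unclean stop is not reported as "crashed".
# /tmp/openrc-demo stands in for the real /run/openrc here.
OPENRC_DIR=/tmp/openrc-demo

# simulate stale state from a previous, uncleanly stopped container
mkdir -p "$OPENRC_DIR/started"

# the workaround itself: drop stale per-service state, recreate the marker
rm -rf "$OPENRC_DIR/started" "$OPENRC_DIR/starting"
mkdir -p "$OPENRC_DIR"
: > "$OPENRC_DIR/softlevel"
```

In the real entrypoint, OPENRC_DIR would be /run/openrc and this would run before the rc-status call.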
This Dockerfile is working. Weirdly, I need to run rc-status first.
FROM alpine
EXPOSE 22
RUN apk update \
&& apk add --no-cache openssh-server openrc git rsync \
&& mkdir -p /run/openrc \
&& touch /run/openrc/softlevel \
&& mkdir /repos /repos-backup \
&& sed -ie "s/#PubkeyAuthentication/PubkeyAuthentication/g" /etc/ssh/sshd_config \
&& sed -ie "s/#PasswordAuthentication yes/PasswordAuthentication no/g" /etc/ssh/sshd_config \
&& echo '0 5 * * * cd /repos; for i in $(ls); do echo -n "$i : "; git -C $i pull 2>/dev/null; done' > /etc/crontabs/root \
&& echo '30 5 * * * rsync -qr /repos/* /repos-backup' >> /etc/crontabs/root
ENTRYPOINT ["sh","-c", "rc-status; rc-service sshd start; crond -f"]
This works fine on my Alpine 3.14 and 3.15 as well, after a few adjustments. Thank you very much.
Thanks @finzzz. This happened to me as well. Running rc-status solved the problem. (As you mentioned, it's weird.)
Two years later, I had the same problem, and it took a couple of hours to get to this Dockerfile:
FROM alpine
RUN apk update \
&& apk add --no-cache \
openssh-server \
#TODO: Remove next line if no ssh client needed to reduce attack surface
openssh \
&& sed -ie "s/#PubkeyAuthentication/PubkeyAuthentication/g" /etc/ssh/sshd_config \
&& sed -ie "s/#PasswordAuthentication yes/PasswordAuthentication no/g" /etc/ssh/sshd_config
RUN ssh-keygen -A
RUN adduser -D app
RUN chown app /etc/ssh/ssh_host_*
RUN touch /run/sshd.pid && chown app /run/sshd.pid
USER app
RUN ssh-keygen -t rsa -q -f "$HOME/.ssh/id_rsa" -N "" && \
cp /home/app/.ssh/id_rsa.pub /home/app/.ssh/authorized_keys
EXPOSE 2222
ENTRYPOINT ["sh", "-c","exec /usr/sbin/sshd -D -e"]
server:
$ docker run --rm -it -p 2222:22 ssh-demo:latest
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
Accepted publickey for app from 127.0.0.1 port 51122 ssh2: RSA SHA256:p7b3zq3yQxY0ol4jK9bmDX7dJv7/yM/Fbdfl94sbmpk
Attempt to write login records by non-root user (aborting)
client:
$ docker exec -it mystifying_ritchie sh
/ $ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ED25519 key fingerprint is SHA256:sld63Yre2TSy8+7hp8ZErHaWoy861mGrEfeAjJnnc7c.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'localhost' (ED25519) to the list of known hosts.
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
e122cc8ec590:~$
Turns out you don't necessarily need OpenRC, since sshd runs fine if you start it directly. This minimizes the container's potential attack surface.
Similar to https://github.com/gliderlabs/docker-alpine/issues/183, but it still does not work.