systemofapwne opened 8 months ago
Since this behavior is intentional, it is not a bug but a feature request. However, you are right that it has a specific negative impact on ioBroker Slaves. Therefore, I will check if we can safely modify this behavior, at least for Slave Containers. It might also be possible to link the "keep-alive function" of the container to the maintenance mode. Thank you very much for your suggestion for improvement.
Regards, André
The issue is not limited to satellite instances: any instance that somehow crashes suffers this fate. So I propose, as you suggested, rewiring/fixing the keep-alive functionality so that it works for all instances, not only satellite ones.
It is not as easy as it sounds. Since using the maintenance script is not mandatory yet, people are still using the "hacky way" described in the docs (https://docs.buanet.de/iobroker-docker-image/docs/#iobroker-js-controller-core-updates) to update the js-controller. Making the keep-alive dependent on the maintenance mode would break the familiar workflow of a lot of users. I am not sure we should do this.
Calling it "the hacky way" yourself makes it sound like an ideal candidate for deprecation, especially since it is described in this very container's documentation, which can easily be modified ;)
Anyhow, I propose two ideas:
Method 1: One way would be to make users aware of the maintenance mode via a shutdown message:
pkill -u iobroker # user is about to use the old 'hacky' way
# entrypoint.sh then shows the message below...
ioBroker has stopped working. If you intended to perform maintenance such as upgrading the js-controller, use "maintenance on" before and "maintenance off" after your intended action
This can of course be worded better, including links to the docs. It would nudge people towards using maintenance mode.
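A minimal sketch of how Method 1 could look inside the entrypoint, assuming the keep-alive is a simple blocking command there. The function name, the polling loop, and the exact wording are illustrative, not the actual script:

```shell
#!/bin/bash
# Sketch only: instead of idling silently forever, block while ioBroker
# processes of the given user exist, then print a hint once they are gone.
wait_and_warn() {
  local user="${1:-iobroker}"
  while command -v pgrep >/dev/null 2>&1 && pgrep -u "$user" >/dev/null 2>&1; do
    sleep 5
  done
  echo 'ioBroker has stopped working. If you intended to perform maintenance' \
       'such as upgrading the js-controller, use "maintenance on" before and' \
       '"maintenance off" after your intended action.'
}
```

The container would still stay alive afterwards (preserving current behavior); only the silence is replaced by a pointer to the maintenance script.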
Method 2: If you would, on the other hand, like to keep things the way they are (for now), I propose an ENV variable (e.g. IOB_MAINTENANCE_MODE_MANDATORY):
I think, the second approach would satisfy both of us, even though it wouldn't fix the underlying issue.
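Roughly, Method 2 could gate the existing keep-alive like this. Note that IOB_MAINTENANCE_MODE_MANDATORY is only the name proposed in this thread, not an env the image actually supports:

```shell
#!/bin/bash
# Sketch only: decide whether the unconditional keep-alive should run.
# IOB_MAINTENANCE_MODE_MANDATORY is the env proposed above (hypothetical).
should_keep_alive() {
  if [ "${IOB_MAINTENANCE_MODE_MANDATORY:-false}" = "true" ]; then
    # opt-in strict mode: let PID 1 exit on a crash so Docker's restart
    # policy (e.g. --restart unless-stopped) can take over
    return 1
  fi
  # default: current behavior, the container stays alive unconditionally
  return 0
}

# In the entrypoint this would wrap the existing keep-alive, e.g.:
# should_keep_alive && gosu iobroker tail -f /dev/null
```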
Calling it yourself "the hacky way" sounds like an ideal candidate for deprecation, especially since this is in this very containers documentation, that can easily be modified ;)
Deprecation sounds good, but you are not the one answering the questions and issues of people not reading the docs or changelogs...
making users aware about the maintenance mode via a shutdown message
pkill -u iobroker
kills all processes running as user iobroker, so it might be quite a challenge to display a message in that situation...
I propose an ENV variable
No. I would not support the idea of another env for this. In the end, the container has to stay administrable and supportable, and every new env makes it more complicated. It would just repeat the short story of when I added an env to deactivate the permission check at startup (because "it takes so long 😮"), only to get issues a year later about permission errors when running/updating the container...
Your idea is filed here as an enhancement. We will see.
Regards, André
Calling it yourself "the hacky way" sounds like an ideal candidate for deprecation, especially since this is in this very containers documentation, that can easily be modified ;)
Deprecation sounds good, but you are not the one answering the questions and issues of people not reading the docs or changelogs...
making users aware about the maintenance mode via a shutdown message
pkill -u iobroker
kills all processes running as user iobroker, so it might be quite a challenge to display a message in that situation...
If you echo a message to /dev/pts/0, it will appear on the user's screen if they entered the container in interactive terminal mode (docker exec -it ...).
I propose an ENV variable
No. I would not support the idea of another env for this. In the end, the container has to stay administrable and supportable, and every new env makes it more complicated. It would just repeat the short story of when I added an env to deactivate the permission check at startup (because "it takes so long 😮"), only to get issues a year later about permission errors when running/updating the container...
I can understand that you want to keep complexity and workarounds as low as possible, which is a good thing IMHO.
Another method would be running iobroker via s6 and ensuring that it keeps running after a crash. But I have yet to come up with an idea for how to identify whether a user purposely "crashed" it via pkill. Maybe by checking for the reason of the crash: if it crashed due to missing db connections (visible in the logs), just restart iobroker. But this all sounds a bit hacky too.
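The crash-reason check could be sketched like this. Both the log path and the error string are assumptions for illustration and would need to be verified against a real js-controller log:

```shell
#!/bin/bash
# Sketch: classify a crash from the js-controller log so a supervisor
# (e.g. s6) could auto-restart only on lost DB connections, while a
# deliberate pkill (which leaves no such log entry) is left alone.
# ASSUMED: log location and error message; adjust to the real log output.
crash_was_db_timeout() {
  local logfile="${1:-/opt/iobroker/log/iobroker.current.log}"
  [ -r "$logfile" ] || return 1
  tail -n 50 "$logfile" | grep -q "no connection to databases"
}
```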
In the meantime, I wrote my own Docker container to mitigate crashes, since for an unknown reason my satellite keeps crashing randomly with the same error (sometimes it runs for weeks, sometimes just a few days or only hours).
FROM buanet/iobroker:latest
# WORKAROUND to disable maintenance mode keep-alive (+ reminder for myself, that I disabled it)
RUN sed -i '/^gosu iobroker tail/d' /opt/scripts/iobroker_startup.sh \
 && echo 'if [ -c "/dev/pts/0" ]; then echo "ioBroker has stopped working. If you intended to perform maintenance such as upgrading the js-controller, use \"maintenance on\" before and \"maintenance off\" after your intended action" > /dev/pts/0; fi' >> /opt/scripts/iobroker_startup.sh
This basically modifies the iobroker_startup.sh script so that instead of
gosu iobroker tail -f /dev/null
it reads
if [ -c "/dev/pts/0" ]; then echo "ioBroker has stopped working. If you intended to perform maintenance such as upgrading the js-controller, use \"maintenance on\" before and \"maintenance off\" after your intended action" > /dev/pts/0; fi
Description
When ioBroker crashes, the container keeps running. This is a side effect of maintenance mode, as one can see on this line. If the container keeps running even though ioBroker crashed, the Docker daemon will not automatically restart the container (even if a restart policy is configured). One should therefore either make sure that ioBroker automatically restarts when it crashes, or make the container stop running in that case.
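One conventional way to get the restart behavior described here without touching the keep-alive at all would be a Docker HEALTHCHECK that fails once no iobroker processes remain; an external watcher (e.g. an autoheal-style container) or an orchestrator could then restart the unhealthy container. The probe itself is just a process check (user name parameterized for illustration; the script path in the comment is hypothetical):

```shell
#!/bin/bash
# Sketch of a healthcheck probe: succeed while processes of the given
# user exist, fail once they are gone (i.e. ioBroker crashed but the
# keep-alive still holds the container open).
# Could be wired up via e.g.: HEALTHCHECK CMD /opt/scripts/healthcheck.sh
iob_alive() {
  pgrep -u "${1:-iobroker}" >/dev/null 2>&1
}
```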
Background: I have two ioBroker instances, my main instance running on my server (incl. redis for states and objects) and a satellite. When the satellite loses connection (restart of the main server or loss of network), it times out and ioBroker simply stops running, leaving behind an "empty container".
Image version
v8.1.0
Docker logs