I have set up a bounty for this on Bountysource: https://www.bountysource.com/issues/88798894-raspberrymatic-docker-support-dockerhub-repository
Thanks for the enhancement request issue. The whole thing will not be a simple undertaking, though, and is not as easy to implement as the x86/OVA variant was. I will nevertheless leave the ticket open in case more feedback comes in or someone is found who wants to work on it.
I would also be very interested in a Docker image.
@jens-maus: Where do you see the biggest hurdles compared to an ESXi/OVA solution? I may be able to look into this soon.
@denisulmer Well, to do it "RaspberryMatic-like" or "buildroot-like", one would first have to let buildroot generate a Docker version and see which limitations show up and what would have to be adapted to work around them. After that, one could start porting the OCCU part of RaspberryMatic. There will certainly be no quick solution, I suspect. All in all, quite a bit of legwork.
Couldn't the whole RaspberryMatic root image simply be provided in a Docker image, with the init system executed as CMD? System scripts would then have to be excluded.
@mpietruschka Feel free to try it and report back. I suspect, however, that this will not work out of the box and also cannot be the final solution, because the Docker image should really be built cleanly via buildroot. But as I said, go wild, report back, or send a pull request.
Strictly following the Docker philosophy, every service would have to be provided in a separate container. In my view that only makes sense if those services are also updated separately, which is not to be expected here.
Is there already a Docker approach for RaspberryMatic? And can I download the plain rootfs for armv7 somewhere?
PS: Is there a chat/discussion channel? Then not all the fundamental questions would have to be settled here. Unless that is desired :)
Regards, Marti
Splitting out the individual services makes no sense for RaspberryMatic. I would not go down that road, because in the end it would have nothing to do with RaspberryMatic anymore. I do, however, already have a rough plan for how one could proceed. Broadly it will probably look something like this: https://github.com/AdvancedClimateSystems/docker-buildroot. That is, you build something around the RaspberryMatic buildroot environment and have buildroot generate a rootfs at the end that can then be thrown into Docker. How much resistance one runs into there is of course hard to estimate; one simply has to test it.
And if you want the armv7 rootfs, just download the .img and extract the rootfs from it. Or use the -ccu3.tgz, which contains the rootfs as a separate file. But as I said, I do not consider it a good approach to simply take the finished RaspberryMatic rootfs and extract what you need from it. It should be done the other way around: adapt the buildroot environment so that at the end a rootfs is produced for Docker use which can be imported into Docker directly. Then this can be integrated cleanly into the RaspberryMatic CI environment, and Docker versions for x86 can also be shipped automatically (if it works out).
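As an illustration of that direction, a minimal sketch (assuming buildroot is configured with BR2_TARGET_ROOTFS_TAR so that it emits a rootfs tarball such as output/images/rootfs.tar; image name and init path are placeholders):

# import the buildroot-generated rootfs as a Docker base image
docker import output/images/rootfs.tar raspberrymatic-rootfs:dev

# start it with the image's own init as PID 1 (flags are only an example)
docker run -d --name ccu-test --privileged raspberrymatic-rootfs:dev /sbin/init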
I think I can follow your reasoning: a repurposed system build might require adapting the Docker image with every change; keyword long-term maintenance.
I don't imagine the differences being all that big, though (at least for now ;). Once it is running, some functions will certainly not be available in the WebUI, for example.
It should, however, be easy to see which adaptations had to be made for Docker. A dedicated buildroot setup could then be started on that basis. Is that the direction you want to go?
Angelnu has already built a Docker image. For that he fetches your repo and the resources he can use. Is that the approach you did not want?
In fact I have switched my main installation to RaspberryMatic since I love some of the extra features added by @jens-maus and I did not have the passion to keep backporting them all to the original CCU base.
So I would be able to generate a docker container out of a tarball containing the CCU filesystem. Then there are a few things I disable when running in a container because they do not make sense or are not possible in a container (loading modules, configuring the HDMI, etc.). I also do a one-time install of the pivccu device driver on the host to support all the HW devices that require extra modules (which must match the host kernel).
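A rough sketch of what such a build could look like (paths and init-script names here are purely hypothetical; the actual Dockerfile is linked a few comments below):

# unpack the CCU/RaspberryMatic filesystem tarball into an empty image
FROM scratch
ADD rootfs.tar /

# container-specific tweaks: drop init steps that only make sense on real
# hardware (kernel module loading, HDMI setup); script names are placeholders
RUN rm -f /etc/init.d/S10modules /etc/init.d/S11hdmi

# hand control to the regular init system
CMD ["/sbin/init"]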
@jens-maus - if you agree I would propose to contribute my docker support to your project and discontinue my standalone version: I do not really see much value in using the vanilla CCU firmware... Let me know if you are interested.
btw: I also speak German, but for technical topics I am more comfortable in English.
@angelnu This sounds great and I would be indeed interested in your docker support stuff. In fact, it would be great to get at least your build scripts where you extract everything from the vanilla CCU firmware into a docker environment and get it somewhat ported over to RaspberryMatic so that we can perhaps add a mechanism that takes the final RaspberryMatic buildroot generated tar archive with the filesystem similar to the CCU and then extract all the docker relevant things and build the docker image in one run. For this we would then indeed also use GitHub Actions to get it more smoothly integrated.
Good, then let us get rolling @jens-maus :-)
In fact the build part is pretty simple: https://github.com/angelnu/docker-ccu/blob/master/Dockerfile
Most of what I do there is download and extract the original CCU and then apply some of your patches, so it would be "just" the lines after 52.
So some questions to get moving: the end goal being that one can simply do "docker run raspberrymatic", independently of the platform.
Hi,
all this sounds great. Did you make any progress in the last two months? Is there something I could help with?
Martin
Not yet - I was hoping @jens-maus could chime in on the questions above - especially since generating a tarball from his build is the prerequisite for generating a docker image out of it.
Any progress? Added $10 on Bountysource. :)
@jens-maus - I have a few days to work on this before being back at work. The main question for me to start is whether you want to produce tarballs from the buildroot (which I would pull into my project to build the docker image) or whether you want to merge my docker steps into your project.
Since I am not so familiar with your project structure, I would start by "just" copying the intermediate tarball from a local build into my project and then progress to uploading a docker image for testing.
If possible I would of course prefer to merge your work into this repository! So please take that route.
Good - it is also my preferred option since I personally use RaspberryMatic and therefore cannot provide good support for the official HomeMatic versions.
Ok, I will prepare a PR for your repo.
I am reopening this issue ticket because, now that @angelnu has finally implemented a first development version of a working RaspberryMatic docker implementation (see #1056), we certainly need a bunch of willing testers once the docker images are built and distributed via a public container registry. So please get ready to test the docker stuff thoroughly. As soon as the first builds are done I will announce them here for testing purposes and would expect some of you to give feedback!
I am ready to test :)
Ok, the party starts!
See here for some basic information/documentation on how to get the snapshot image of the RaspberryMatic docker installed and running:
https://github.com/jens-maus/RaspberryMatic/wiki/Docker
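For a quick manual test, something along these lines should work (a minimal sketch only; the wiki is the authoritative reference, and the host ports/flags shown here are just example values):

# pull the current snapshot image
docker pull ghcr.io/jens-maus/raspberrymatic:snapshot

# run it privileged, persist /usr/local and publish the WebUI plus the XMLRPC ports
docker run -d --name ccu --privileged \
  -v /opt/ccu:/usr/local \
  -p 8080:80 -p 2001:2001 -p 2010:2010 -p 9292:9292 \
  ghcr.io/jens-maus/raspberrymatic:snapshot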
Please note that this is of course WIP and that we will probably change/adapt e.g. the deploy.sh as much as required. In addition, the whole docker layout may still change and potentially become incompatible at some point. So please share this information carefully and be prepared to start over until the final release version of the docker image is out.
However, I do appreciate any feedback here or elsewhere, because this docker image has mostly been tested by only two people so far. So please give some feedback and raise your change requests as early as possible so that we can adapt/fix the docker version of RaspberryMatic ASAP.
And again, let me thank @angelnu as much as possible for getting the docker party started. And anyone having contributed to the docker bounty (https://www.bountysource.com/issues/88798894-raspberrymatic-docker-support-dockerhub-repository) should be prepared to pay out @angelnu for his great contribution!
@jens-maus is it possible to get it on the docker-hub? I want to test it on my Synology Docker, and the installation via GUI is only possible when using the hub. :)
@nicx Can't you add the GitHub Container Registry (ghcr.io) as an additional container registry? But in the end, using the RaspberryMatic docker on a closed-source NAS system might actually be a problem, because you need to install and compile the raw-uart kernel modules yourself on the host, which is of course hard on such a closed-source NAS.
@nicx - you might want to check https://www.reddit.com/r/docker/comments/b8z8jh/docker_pull_from_private_repo_on_synology_nas/
yeah, already tried to use the URL, but with no luck. So I will have to wait for public availability on Docker Hub, or perhaps I will try to get it working manually on my NAS. By the way: why don't you just make the kernel modules optional? As far as I understood, they are only needed when using directly attached BidCos hardware. In most cases when using a docker container I would expect a LAN gateway is used anyway; that is the case for me ;)
Hey, good work guys. I got it running and was able to access the WebUI. Unfortunately my HM-MOD-RPI-PCB was not recognized. I used the manual installation since the deploy.sh threw an error. I used the privileged flag.
Guys, can we please concentrate on reporting issues with detailed information so that we can solve them? This is not a support forum here, so if you run into an error (e.g. with the deploy script), please show it so that we can fix the error instead of just telling each other workarounds?!?
Why don't you just make the kernel modules optional?
It could indeed be done if there are enough use cases for gateway-only setups. But IMO we should first try to stabilize the current setup before adding more flows.
I got it running and was able to access the WebUI. Unfortunately my HM-MOD-RPI-PCB was not recognized.
Please post the output of the deploy script. Btw: I just found a typo at the end of the module install (when I reload the service) that makes the first run of the script fail. If you execute the script a second time it works. I will fix that in the next few minutes.
Why don't you just make the kernel modules optional?
It could indeed be done if there are enough use cases for gateway-only setups. But IMO we should first try to stabilize the current setup before adding more flows.
And let me add: using only LAN-based gateways is only possible with the older and partly obsolete BidCos-RF/HomeMatic protocol. As soon as homematicIP devices are used, a real GPIO- or USB-connected RPI-RF-MOD RF module is always required, even when a HmIP-HAP LAN gateway is in use.
Please post the output of the deploy script.
Sorry, I can't. I was toying around with my setup and must have fried my micro sd card. I'll first have to do the setup again. Then I can try.
Hi Jens,
thank you and @angelnu for your work on the Docker implementation.
I have a x86 server running Proxmox. My various smart home services are running in a Debian 10 VM and RaspberryMatic used to run in a separate VM. I use a HB-RF-USB forwarded to the VM via vendor:device assignment.
So I gave the container a try, but ran the commands from deploy.sh manually. Why is kernel.sched_rt_runtime_us=-1 needed? I have skipped this for now and it seems to run fine regardless.
As I use docker-compose, I start the container like this:
services:
  ccu:
    image: ghcr.io/jens-maus/raspberrymatic:snapshot
    container_name: ccu
    hostname: ccu
    privileged: true
    volumes:
      - /opt/ccu:/usr/local
    ports:
      - "8082:80"
      - "2001:2001"
      - "2010:2010"
      - "9292:9292"
    restart: unless-stopped
and noticed the following things:
$ docker exec -it ccu /bin/df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 519576044 52683568 445516692 11% /
tmpfs 65536 0 65536 0% /dev
tmpfs 11307820 0 11307820 0% /sys/fs/cgroup
shm 65536 0 65536 0% /dev/shm
/dev/sda2 519576044 52683568 445516692 11% /usr/local
/dev/sda2 519576044 52683568 445516692 11% /etc/resolv.conf
/dev/sda2 519576044 52683568 445516692 11% /etc/hostname
/dev/sda2 519576044 52683568 445516692 11% /etc/hosts
devtmpfs 11292936 0 11292936 0% /dev_host
tmpfs 11307820 116 11307704 0% /tmp
tmpfs 11307820 540 11307280 0% /run
tmpfs 11307824 2772 11305052 0% /var
tmpfs 11307820 0 11307820 0% /media
/dev/sdb1 7751366384 7744295572 0 100% /media/usb1
/dev/sdd1 7751366384 5974434472 1386214252 81% /media/usb2
/dev/sdf1 7751366384 5450277084 1910371640 74% /media/usb3
/dev/sde1 7751366384 5455481792 1905166932 74% /media/usb4
/dev/sdc1 7751366384 5451010292 1909638432 74% /media/usb5
/dev/sda2 519576044 52683568 445516692 11% /media/usb6
/dev/sda1 523248 5224 518024 1% /media/usb7
So I gave the container a try, but ran the commands from deploy.sh manually. Why is kernel.sched_rt_runtime_us=-1 needed? I have skipped this for now and it seems to run fine regardless.
This is also something I have previously asked @angelnu, namely why he has added this sysctl call to the deploy.sh, because I also have the feeling that this might potentially interfere too much with the host system.
As I use docker-compose, I start the container like this:
services:
  ccu:
    image: ghcr.io/jens-maus/raspberrymatic:snapshot
    container_name: ccu
    hostname: ccu
    privileged: true
    volumes:
      - /opt/ccu:/usr/local
    ports:
      - "8082:80"
      - "2001:2001"
      - "2010:2010"
      - "9292:9292"
    restart: unless-stopped
Can you please go into more detail here or (even better) document your stuff in the respective wiki documentation pages so that others wanting to use docker-compose can benefit from your finding? That would be great.
- deploy.sh and the Wiki entry use the tag "latest", but only "snapshot" exists
This should already be changed in the documentation, so that CCU_OCI_TAG="snapshot" is set before execution of the deploy.sh and thus the snapshot is used.
- Sometimes when stopping the container with "docker stop", I get an alert after the next start, that there was an unclean shutdown.
- After one restart, I got the alert "usb1 (/media/usb1) running low on disk space", then I noticed that it mounts all my hard drives in the container, can this be avoided?:
$ docker exec -it ccu /bin/df
[...]
/dev/sdb1 7751366384 7744295572 0 100% /media/usb1
/dev/sdd1 7751366384 5974434472 1386214252 81% /media/usb2
/dev/sdf1 7751366384 5450277084 1910371640 74% /media/usb3
/dev/sde1 7751366384 5455481792 1905166932 74% /media/usb4
/dev/sdc1 7751366384 5451010292 1909638432 74% /media/usb5
/dev/sda2 519576044 52683568 445516692 11% /media/usb6
/dev/sda1 523248 5224 518024 1% /media/usb7
Thanks for these reports, I will have a look at how I can solve them. The last point (usb mounting) could easily be solved by disabling the usb mounting altogether. The question remains whether there might be a reason to keep this usb automounting for the docker platform?!?
- "docker logs ccu" is spammed with "Please press Enter to activate this console." after startup
This should already be solved by having disabled the login console. So the next snapshot should then not exhibit this behaviour anymore.
I have now tested the manual installation via docker pull ghcr.io/jens-maus/raspberrymatic:snapshot over SSH on my Synology (x86), and what can I say: it works flawlessly! The dependency on the kernel modules is not noticeable ;)
After starting the container, I have imported a backup of my previous environment from Debmatic. Flawless. All 3 LAN gateways are working. Groups, devices, programs, everything works without errors.
I have therefore "risked" switching completely to the Docker solution right away, so that I can provide even better feedback here.
So far I have only noticed 2 things:
- after restarting the container, there is a "Watchdog Alert" in the GUI that needs to be manually confirmed, but otherwise doesn't seem to have any effect at all.
- my Home Assistant environment can no longer access the CCU system variables via XMLRPC. The CCU firewall is off, the security level is set to the lowest possible.
Error in HA:
2021-01-07 11:00:48 ERROR (SyncWorker_43) [pyhomematic._hm] RPCFunctions.jsonRpcPost: Exception: HTTP Error 405: Not Allowed
2021-01-07 11:00:48 WARNING (SyncWorker_43) [pyhomematic._hm] ServerThread.jsonRpcLogin: Unable to open session.
Can this be related to this error message on the CCU?
Jan 7 11:00:38 ccu user.err rfd: XmlRpc fault calling system.listMethods({"homeassistant-rf"}) on http://192.168.0.1:35395/RPC2:[faultCode:1,faultString:"<class 'TypeError'>:system_listMethods() takes 1 positional argument but 2 were given"]
Otherwise: Great work! Thanks to you!
After starting the container, I have imported a backup of my previous environment from Debmatic. Flawless. All 3 LAN gateways are working. Groups, devices, programs, everything works without errors.
This is great news and makes it sound very promising that we can target a public release of the docker environment together with the other platforms at the end of the month.
So far I only notice 2 things:
- after restarting the container, there is a "Watchdog Alert" in the GUI that needs to be manually confirmed, but otherwise doesn't seem to have any effect at all.
What is the exact message/description of that Watchdog Alert? Please show so that I can potentially tune the docker container once more to get rid of it.
- my Home Assistant environment can no longer access the CCU system variables via XMLRPC. The CCU firewall is off, the security level is set to the lowest possible.
Are you sure you have exposed the necessary XMLRPC ports when starting the docker? So please show the exact command that you are using to start the raspberrymatic docker.
I just made the upgrade from https://github.com/angelnu/docker-ccu and the HomeMatic-part worked without a problem, thanks, great work!
However, I am using https://github.com/thkl/homebridge-homematic to make the devices available in HomeKit, and this uses the Tcl Rega script port 8181. Therefore I modified the command:
sudo CCU_OCI_TAG="snapshot" CCU_PORTS_TO_OPEN="2001 2010 8181" ./deploy.sh
and my HomeMatic devices did not respond. Then I realized the wiki (https://github.com/jens-maus/RaspberryMatic/wiki/Docker#additional-deploy-settings) differs from the actual deploy.sh: https://github.com/jens-maus/RaspberryMatic/blob/master/buildroot-external/board/oci/deploy.sh#L23
The correct command is
sudo CCU_OCI_TAG="snapshot" CCU_RFD_PORTS_TO_OPEN="2001 2010 8181" ./deploy.sh
I think this may also help @nicx
Please update the wiki or the code so they match each other.
Furthermore we could also add a setting for this port like I did a few years back for docker-ccu: https://github.com/angelnu/docker-ccu/pull/15. I can submit a PR if you want.
Please update the wiki or the code so they match each other.
Furthermore we could also add a setting for this port like I did a few years back for docker-ccu: angelnu/docker-ccu#15. I can submit a PR if you want.
Thanks for the feedback. I modified the deploy.sh script accordingly and added port 8181 to the default expose port list for the time being. Note, however, that this might still change in the final version.
unfortunately exposing port 8181 did not help in my case; Home Assistant still cannot read the CCU variables. Currently I am exposing ports 2000, 2001, 2010, 9292 and now 8181, too. Are any more ports needed for this?
ok, I think this problem is not related to the docker CCU itself, but to the Home Assistant integration or pyhomematic, because I am not alone ;)
So I gave the container a try, but ran the commands from deploy.sh manually. Why is kernel.sched_rt_runtime_us=-1 needed? I have skipped this for now and it seems to run fine regardless.
This is also something I have previously asked @angelnu why he has added this sysctl call to the deploy.sh because I also have the feeling that this might potentially interfere too much with the host system.
This helped in the past with the rfd daemon but I honestly do not remember why. So I would suggest to remove it. I will modify the deploy script.
unfortunately exposing the port 8181 did not help in my case, Home Assistant still cannot read the CCU variables. Currently I am exposing ports 2000,2001,2010,9292 and now 8181, too.
I also run with home assistant and it works for me but I run on the default ports.
@jens-maus - I think it makes sense to open 2010, 9292 and 8181 by default - users can still block with the firewall in Raspberrymatic.
Sometimes when stopping the container with "docker stop", I get an alert after the next start, that there was an unclean shutdown.
I also get the warning... Docker sends a kill to PID 1 and I see the init in the container reacting and starting to shut down as it should. So either it does not get enough time to complete before docker sends the kill -9, or Raspberrymatic does not like being stopped from the outside.
I would need to check when the "running" flag is removed by Raspberrymatic during shutdown...
So far I only notice 2 things:
- after restarting the container, there is a "Watchdog Alert" in the GUI that needs to be manually confirmed, but otherwise doesn't seem to have any effect at all.
What is the exact message/description of that Watchdog Alert? Please show so that I can potentially tune the docker container once more to get rid of it.
unfortunately I haven't been able to reproduce it so far; the docker container is running and reboots are working. I will come back to you with more details as soon as the watchdog error reappears ;)
@nicx I also couldn't reproduce it, and the clean-shutdown watchdog seems to work fine. However, perhaps you shut down the docker the hard way without waiting for it to shut down cleanly. In that case the WatchDog is of course right to complain about an unclean shutdown.
just a short update: the docker container is still running without problems... also in my combination with Home Assistant. 👍
One weird side effect: when restarting the ccu container, my Synology registers a reconnection to my UPS. Is there anything inside the container which could cause my Synology to do that? The NUT config is set to "MODE=none".
just a short update: the docker container is still running without problems... also in my combination with Home Assistant. 👍
Thanks for the feedback. Good to know that the container works so far. Please continue your testing and please keep the container updated to the latest snapshot version, because we are currently improving it on a daily basis :)
One weird side effect: when restarting the ccu container, my Synology registers a reconnection to my UPS. Is there anything inside the container which could cause my Synology to do that? The NUT config is set to "MODE=none".
This can only be potentially caused by the udev daemon startup which might try to generate some devices. But I am not sure, actually. Please give more details and show logs, screenshots, etc.
I can also confirm that it is running without problems. Every night at 2 AM I pull all docker images and recreate containers which have an updated image. So it pulled the latest snapshot (3.55.5.20210107-e71dfa5), recreated the container and then also restarted my Home Assistant container, because I added the ccu container as a dependency for the Home Assistant container. Everything started up successfully and Home Assistant reconnected to the CCU container automatically.
docker logs is also looking good now, so no more "Please press Enter to activate this console." spam. @nicx: Have you also pulled the latest snapshot yet? From the docker logs it looks like Jens deactivated NUT:
$ docker logs ccu
Starting watchdog...
Identifying onboard hardware: oci, OK
Initializing RTC Clock: onboard, OK
Running sysctl: OK
Checking for Factory Reset: not required
Checking for Backup Restore: not required
Initializing System: OK
Starting logging: OK
Populating /dev using udev: done
modprobe: can't change directory to '/lib/modules': No such file or directory
modprobe: can't change directory to '/lib/modules': No such file or directory
Identifying Homematic RF-Hardware: .....HmRF: RPI-RF-MOD/HB-RF-USB@usb-0000:03:00.0-12.4, HmIP: RPI-RF-MOD/HB-RF-USB@usb-0000:03:00.0-12.4, OK
Updating Homematic RF-Hardware: RPI-RF-MOD: 4.2.6, OK
Starting irqbalance: OK
Starting network: eth0: link up, fixed, firewall, inet up, 172.18.0.2, OK
Preparing start of hs485d: no Hm-Wired hardware found
Starting xinetd: OK
Starting eq3configd: OK
Starting lighttpd: OK
Starting ser2net: no configuration file
Starting ssdpd: OK
Starting sshd: OK
Starting NUT services: disabled
Initializing Third-Party Addons: OK
Starting LGWFirmwareUpdate: ...OK
Setting LAN Gateway keys: OK
Starting hs485d: no Hm-Wired hardware found
Starting multimacd: .OK
Starting rfd: .OK
Starting HMIPServer: ........OK
Starting ReGaHss: .OK
Starting CloudMatic: OK
Starting NeoServer: [Fri Jan 8 02:02:02 CET 2021] /usr/local/etc/config/rc.d/97NeoServer start
[Fri Jan 8 02:02:02 CET 2021] /usr/local/etc/config/rc.d/97NeoServer starting neo_server ...
[Fri Jan 8 02:02:03 CET 2021] /usr/local/etc/config/rc.d/97NeoServer neo_server started (pid=1156)
OK
Starting Third-Party Addons: OK
Starting crond: OK
Setup onboard LEDs: booted, OK
@jens-maus Maybe you could also deactivate the SSH daemon. It seems a bit redundant when you can always enter a container with "docker exec -it <container> /bin/sh" from the machine that runs it.
Regarding the exposing of ports: If only other containers on the same machine are communicating with the RaspberryMatic container, then there is no need for the -p flags to expose ports to the host. Containers can reference each other by their container name and reach all ports directly. It's generally best practice to keep as much in the container as possible, so that the host remains safe should the service inside the container be attacked.
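To illustrate (a hypothetical compose fragment, not taken from the wiki): if Home Assistant and the CCU run in the same compose project, Home Assistant can reach the CCU under its service name without any published ports:

services:
  ccu:
    image: ghcr.io/jens-maus/raspberrymatic:snapshot
    privileged: true
    # no "ports:" needed for container-to-container traffic
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    # from inside this container the CCU is reachable as e.g. http://ccu:2010 or http://ccu:9292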
What do you think about splitting the deploy.sh, so that changes necessary for the host are separated from the setup of the Docker container? When using docker-compose or even other interfaces to create containers, the part starting at https://github.com/jens-maus/RaspberryMatic/blob/9234725f4d1c4b8346946e905d923cfcd80643a1/buildroot-external/board/oci/deploy.sh#L103 may interfere with that process. That's why I ran the parts for the host manually and did not execute deploy.sh.
@hanzoh
I can also confirm that it is running without problems.
Thanks for the feedback.
@nicx: Have you also pulled the latest snapshot yet? From the docker logs it looks like Jens deactivated NUT:
Nope, I didn't deactivate NUT explicitly. The startup of NUT depends on the user placing the right config files in the /etc/config config space. You haven't configured NUT, thus it is not enabled; that's why it outputs 'disabled'.
@jens-maus Maybe you could also deactivate the SSH daemon. It seems a bit redundant when you can always enter a container with "docker exec -it <container> /bin/sh" from the machine that runs it.
You can disable the SSH daemon startup yourself: just open the WebUI and disable SSH under "Settings -> Security", and the SSH daemon should not be started anymore after a restart.
Regarding the exposing of ports: If only other containers on the same machine are communicating with the RaspberryMatic container, then there is no need for the -p flags to expose ports to the host. Containers can reference each other by their container name and reach all ports directly. It's generally best practice to keep as much in the container as possible, so that the host remains safe should the service inside the container be attacked.
That's clear, but that is yet another user config/setting option. We cannot know how the user wants to use the raspberrymatic docker, and we expect ordinary users here, so we suggest the -p option in the documentation so that the docker behaves essentially the same as a real raspberrymatic machine. Advanced users can of course tune their expose/host settings to their liking.
What do you think about splitting the deploy.sh, so that changes necessary for the host are separated from the setup of the Docker container?
I also thought about that, indeed. Thus, having a dedicated install script that just installs the necessary dependencies, and then a dedicated startup script that starts all dependencies and then starts the docker. And my mid-term plan is to potentially also generate full-fledged debian packages around these things, so that ordinary users on a debian machine can install and use the raspberrymatic docker with normal apt install commands, with the service descriptions then kicking in to ensure that the docker and its dependencies are installed and started correctly.
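Such a service description could, very roughly, look like the following systemd unit (purely illustrative; names, paths and flags are assumptions, not part of any existing package):

# /etc/systemd/system/raspberrymatic-docker.service (illustrative sketch only)
[Unit]
Description=RaspberryMatic CCU container
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f ccu
ExecStart=/usr/bin/docker run --name ccu --privileged \
  -v /opt/ccu:/usr/local -p 80:80 -p 2001:2001 -p 2010:2010 \
  ghcr.io/jens-maus/raspberrymatic:snapshot
ExecStop=/usr/bin/docker stop -t 30 ccu

[Install]
WantedBy=multi-user.target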
When using docker-compose or even other interfaces to create containers, the part starting at https://github.com/jens-maus/RaspberryMatic/blob/9234725f4d1c4b8346946e905d923cfcd80643a1/buildroot-external/board/oci/deploy.sh#L103 may interfere with that process. That's why I ran the parts for the host manually and did not execute deploy.sh.
Then please suggest how to replace it.
just a short update: the docker container is still running without problems... also in my combination with Home Assistant. 👍
Thanks for the feedback. Good to know that the container works so far. Please continue your testing and please keep the container updated to the latest snapshot version, because we are currently improving it on a daily basis :)
I am automatically updating my containers (including CCU) with watchtower; last night there were 2 successful updates of the CCU container.
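For anyone wanting a similar setup, a minimal watchtower compose fragment could look like this (a sketch only; the schedule and further options are left at their defaults here):

services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # watchtower needs the Docker socket to pull images and recreate containers
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped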
One weird side effect: when restarting the ccu container, my Synology registers a reconnection to my UPS. Is there anything inside the container which could cause my Synology to do that? The NUT config is set to "MODE=none".
This can only be potentially caused by the udev daemon startup which might try to generate some devices. But I am not sure, actually. Please give more details and show logs, screenshots, etc.
hm I really don't know where to start, so here are some logs, screenshots, etc:
2021-01-08 11:07:32 | stdout | Setup onboard LEDs: booted, OK
2021-01-08 11:07:32 | stdout | Starting crond: OK
2021-01-08 11:07:32 | stdout | Starting Third-Party Addons: OK
2021-01-08 11:07:32 | stdout | Starting NeoServer: disabled
2021-01-08 11:07:32 | stdout | Starting CloudMatic: OK
2021-01-08 11:07:32 | stdout | Starting ReGaHss: .OK
2021-01-08 11:07:30 | stdout | Starting HMIPServer: ...OK
2021-01-08 11:07:26 | stdout | Starting rfd: ..OK
2021-01-08 11:07:22 | stdout | Starting multimacd: not required
2021-01-08 11:07:22 | stdout | Starting hs485d: no Hm-Wired hardware found
2021-01-08 11:07:22 | stdout | Setting LAN Gateway keys: OK
2021-01-08 11:07:22 | stdout | Starting LGWFirmwareUpdate: ...OK
2021-01-08 11:07:22 | stdout | Initializing Third-Party Addons: OK
2021-01-08 11:07:22 | stdout | Starting NUT services: disabled
2021-01-08 11:07:22 | stdout | Starting ssdpd: OK
2021-01-08 11:07:22 | stdout | Starting ser2net: no configuration file
2021-01-08 11:07:22 | stdout | Starting lighttpd: OK
2021-01-08 11:07:22 | stdout | Starting eq3configd: OK
2021-01-08 11:07:22 | stdout | Starting xinetd: OK
2021-01-08 11:07:22 | stdout | Preparing start of hs485d: no Hm-Wired hardware found
2021-01-08 11:07:22 | stdout | Starting network: eth0: link up, fixed, firewall, inet up, 172.17.0.7, OK
2021-01-08 11:07:22 | stdout | Starting irqbalance: OK
2021-01-08 11:07:22 | stdout | Updating Homematic RF-Hardware: no GPIO/USB connected RF-hardware found
2021-01-08 11:07:22 | stdout | Identifying Homematic RF-Hardware: HmRF: none, HmIP: none, OK
2021-01-08 11:07:20 | stdout | done
2021-01-08 11:07:20 | stdout | Populating /dev using udev: udevadm settle failed
2021-01-08 11:06:50 | stdout | Starting logging: OK
2021-01-08 11:06:50 | stdout | Initializing System: OK
2021-01-08 11:06:50 | stdout | Checking for Backup Restore: not required
2021-01-08 11:06:50 | stdout | Checking for Factory Reset: not required
2021-01-08 11:06:50 | stdout | Running sysctl: OK
2021-01-08 11:06:50 | stdout | Initializing RTC Clock: onboard, OK
2021-01-08 11:06:50 | stdout | Identifying onboard hardware: oci, OK
***** messages *****
Jan 8 12:06:50 ccu syslog.info syslogd started: BusyBox v1.32.0
Jan 8 12:06:50 ccu user.info usbmount[159]: /dev/sdq does not contain a filesystem or disklabel
Jan 8 12:06:50 ccu user.info usbmount[170]: /dev/sda does not contain a filesystem or disklabel
Jan 8 12:06:50 ccu user.info usbmount[197]: /dev/sdq2 does not contain a filesystem or disklabel
Jan 8 12:06:50 ccu user.info usbmount[218]: /dev/sda1 does not contain a filesystem or disklabel
Jan 8 12:06:51 ccu user.info usbmount[240]: /dev/sdg does not contain a filesystem or disklabel
Jan 8 12:06:51 ccu user.info usbmount[250]: /dev/sdg5 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[175]: /dev/sdc does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[271]: /dev/sdc2 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[181]: /dev/sde does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[297]: /dev/sde2 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[172]: /dev/sdb does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[324]: /dev/sdb1 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[336]: /dev/sdb3 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[345]: /dev/sdb5 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[189]: /dev/sdf does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[359]: /dev/sdf1 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[372]: /dev/sdf3 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[381]: /dev/sdf5 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[201]: /dev/sdq3 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[196]: /dev/sdq1 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[220]: /dev/sda2 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[221]: /dev/sda3 does not contain a filesystem or disklabel
Jan 8 12:06:55 ccu user.info usbmount[219]: /dev/sda5 does not contain a filesystem or disklabel
Jan 8 12:06:56 ccu user.info usbmount[249]: /dev/sdg1 does not contain a filesystem or disklabel
Jan 8 12:07:00 ccu user.info usbmount[274]: /dev/sdc3 does not contain a filesystem or disklabel
Jan 8 12:07:00 ccu user.info usbmount[298]: /dev/sde3 does not contain a filesystem or disklabel
Jan 8 12:07:00 ccu user.info usbmount[323]: /dev/sdb2 does not contain a filesystem or disklabel
Jan 8 12:07:00 ccu user.info usbmount[360]: /dev/sdf2 does not contain a filesystem or disklabel
Jan 8 12:07:06 ccu user.info usbmount[251]: /dev/sdg2 does not contain a filesystem or disklabel
Jan 8 12:07:10 ccu user.info usbmount[273]: /dev/sdc1 does not contain a filesystem or disklabel
Jan 8 12:07:10 ccu user.info usbmount[299]: /dev/sde5 does not contain a filesystem or disklabel
Jan 8 12:07:22 ccu user.info firewall: configuration set
Jan 8 12:07:22 ccu daemon.err xinetd[607]: Unable to read included directory: /etc/config/xinetd.d [file=/etc/xinetd.conf] [line=14]
Jan 8 12:07:22 ccu daemon.crit xinetd[607]: 607 {init_services} no services. Exiting...
Jan 8 12:07:22 ccu user.info root: Updating RF Lan Gateway Coprocessor Firmware
Jan 8 12:07:22 ccu user.debug update-coprocessor: firmware filename is: coprocessor_update_hm_only.eq3
Jan 8 12:07:22 ccu user.info root: Updating RF Lan Gateway Firmware
Jan 8 12:07:22 ccu user.info update-lgw-firmware: No gateway found in config file /etc/config/rfd.conf
Jan 8 12:07:25 ccu user.info usbmount[272]: /dev/sdc5 does not contain a filesystem or disklabel
Jan 8 12:07:25 ccu user.info usbmount[300]: /dev/sde1 does not contain a filesystem or disklabel
Jan 8 12:07:29 ccu user.err rfd: HSSParameter::GetValue() id=DECISION_VALUE failed getting physical value.
Jan 8 12:07:32 ccu daemon.info : starting pid 872, tty '/dev/null': '/usr/bin/monit -Ic /etc/monitrc'
Jan 8 12:07:32 ccu user.info monit[872]: Starting Monit 5.27.1 daemon with http interface at /var/run/monit.sock
Jan 8 12:07:32 ccu user.info monit[872]: 'ccu' Monit 5.27.1 started
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=ENERGY_COUNTER failed getting physical value.
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0026179:1","ENERGY_COUNTER"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0026179:1","ENERGY_COUNTER"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=ENERGY_COUNTER failed getting physical value.
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0026163:1","ENERGY_COUNTER"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0026163:1","ENERGY_COUNTER"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=GAS_ENERGY_COUNTER failed getting physical value.
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0026179:1","GAS_ENERGY_COUNTER"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0026179:1","GAS_ENERGY_COUNTER"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=BOOT failed getting physical value.
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0026179:1","BOOT"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0026179:1","BOOT"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=GAS_ENERGY_COUNTER failed getting physical value.
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0026163:1","GAS_ENERGY_COUNTER"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=BOOT failed getting physical value.
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0026163:1","GAS_ENERGY_COUNTER"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"MEQ0026163:1","BOOT"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"MEQ0026163:1","BOOT"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:38 ccu user.err rfd: HSSParameter::GetValue() id=RAIN_COUNTER failed getting physical value.
Jan 8 12:07:38 ccu local0.warn ReGaHss: WARNING: XMLRPC 'getValue': rpcClient.isFault() failed (url: xmlrpc_bin://127.0.0.1:32001, params: {"OEQ2113454:1","RAIN_COUNTER"}, result: [faultCode:-1,faultString:"Failure"]) [CallXmlrpcMethod():iseXmlRpc.cpp:2608]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: XMLRPC 'getValue' call failed (interface: 1007, params: {"OEQ2113454:1","RAIN_COUNTER"}) [CallGetValue():iseXmlRpc.cpp:1435]
Jan 8 12:07:38 ccu local0.err ReGaHss: ERROR: CallGetValue failed; sVal = 0.000000 [ReadValue():iseDOMdpHSS.cpp:124]
Jan 8 12:07:45 ccu user.err monit[872]: 'sshdEnabled' status failed (1) -- no output
Jan 8 12:07:45 ccu user.err monit[872]: 'hs485dEnabled' status failed (1) -- no output
Jan 8 12:07:45 ccu user.err monit[872]: 'multimacdEnabled' status failed (1) -- no output
Jan 8 12:07:45 ccu user.err monit[872]: 'hmlangwEnabled' status failed (1) -- no output
Jan 8 12:07:45 ccu user.warn monit[872]: 'hasUSB' status failed (1) -- no output
Jan 8 12:07:45 ccu user.err monit[872]: 'uncleanShutdownCheck' status failed (0) -- no output
Jan 8 12:07:45 ccu user.info monit[872]: 'uncleanShutdownCheck' exec: '/bin/sh -c /bin/triggerAlarm.tcl 'Unclean shutdown or system crash identified' WatchDog-Alarm ; rm -f /var/status/uncleanShutdown'
Jan 8 12:07:45 ccu user.err monit[872]: Lookup for '/media/usb1' filesystem failed -- not found in /proc/self/mounts
Jan 8 12:07:45 ccu user.err monit[872]: Filesystem '/media/usb1' not mounted
Jan 8 12:07:45 ccu user.err monit[872]: 'usb1' unable to read filesystem '/media/usb1' state
Jan 8 12:07:45 ccu user.info monit[872]: 'usb1' trying to restart
Jan 8 12:08:00 ccu user.warn monit[872]: 'hasUSB' status failed (1) -- no output
Jan 8 12:08:00 ccu user.info monit[872]: 'uncleanShutdownCheck' status succeeded (1) -- no output
Jan 8 12:08:00 ccu user.err monit[872]: Filesystem '/media/usb1' not mounted
Jan 8 12:08:00 ccu user.err monit[872]: 'usb1' unable to read filesystem '/media/usb1' state
Jan 8 12:08:00 ccu user.info monit[872]: 'usb1' trying to restart
Jan 8 12:08:15 ccu user.warn monit[872]: 'hasUSB' status failed (1) -- no output
Jan 8 12:08:15 ccu user.err monit[872]: Filesystem '/media/usb1' not mounted
Jan 8 12:08:15 ccu user.err monit[872]: 'usb1' unable to read filesystem '/media/usb1' state
Jan 8 12:08:15 ccu user.info monit[872]: 'usb1' trying to restart
Jan 8 12:08:31 ccu user.warn monit[872]: 'hasUSB' status failed (1) -- no output
Jan 8 12:08:31 ccu user.err monit[872]: Filesystem '/media/usb1' not mounted
Jan 8 12:08:31 ccu user.err monit[872]: 'usb1' unable to read filesystem '/media/usb1' state
Jan 8 12:08:31 ccu user.info monit[872]: 'usb1' trying to restart
Jan 8 12:08:46 ccu user.err monit[872]: 'hasUSB' status failed (1) -- no output
Jan 8 12:10:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:15:00 ccu user.err rfd: XmlRpc fault calling system.listMethods({"homeassistant-rf"}) on http://192.168.0.1:46739/RPC2:[faultCode:1,faultString:"<class 'TypeError'>:system_listMethods() takes 1 positional argument but 2 were given"]
Jan 8 12:15:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:17:04 ccu user.err rfd: XmlRpc fault calling system.listMethods({"homeassistant-rf"}) on http://192.168.0.1:46739/RPC2:[faultCode:1,faultString:"<class 'TypeError'>:system_listMethods() takes 1 positional argument but 2 were given"]
Jan 8 12:20:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:25:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:30:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:35:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:40:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
Jan 8 12:45:00 ccu user.debug script: [DutyCycle 1] c DutyCycle mit HM Script und system.exec v 1.0 by Alchy
***** hmserver.log *****
Jan 8 12:07:27 de.eq3.lib.util.dynamics.GenericFactory INFO [main] @GenericFactory
Jan 8 12:07:27 de.eq3.lib.util.dynamics.GenericFactory INFO [main] created instance of HMServerConfiguration with parameter(s)
Jan 8 12:07:27 de.eq3.lib.util.dynamics.GenericFactory INFO [main] passed 1 parameter(s), in declarative order [String]
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Creating instance of HMServer...
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Default MaxEventLoopExecuteTime: 2000000000
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Default BlockedThreadCheckInterval: 1000
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Default MaxWorkerExecuteTime: 60000000000
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Default EventLoopPoolSize: 2
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [BackendWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [GroupRequestWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [DiagramRequestWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [StorageRequestWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [DeviceFirmwareRequestWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [EnergyPriceRequestWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [CouplingRequestWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [RegaClientWorker] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: added for deployment [GroupConfigurationPersistenceFileSystem] (1) *worker
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: deploying 9 classes to Vert.x
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-2] SYSTEM: start of BackendWorker succeeded (29cef303-cfd3-4086-89b5-5a4326f9a5b9)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-2] SYSTEM: start of StorageRequestWorker succeeded (e1be449c-bd92-44f0-b42a-c3bcf53a6668)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: 9 VertxDeployers initialized
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-1] SYSTEM: start of CouplingRequestWorker succeeded (437c299b-1972-45fd-bd52-5520dc8c8bc7)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-5] SYSTEM: start of EnergyPriceRequestWorker succeeded (d22a56eb-9b2d-400c-bb06-1fe359cb4d54)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-0] SYSTEM: start of RegaClientWorker succeeded (3b69f079-d51a-4341-8db9-49d84a7734d7)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-0] SYSTEM: start of DiagramRequestWorker succeeded (90ea88d7-96b1-41ae-b89b-01948fd0b731)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-1] SYSTEM: start of GroupRequestWorker succeeded (6fa2567b-7232-4dce-af9b-3ea818fa260e)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-0] SYSTEM: start of DeviceFirmwareRequestWorker succeeded (0476172a-ff6c-4d43-936d-f84670b9e632)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [vert.x-eventloop-thread-2] SYSTEM: start of GroupConfigurationPersistenceFileSystem succeeded (9de513ce-6731-47c2-a29c-59dbf66130dc)
Jan 8 12:07:27 de.eq3.cbcs.vertx.management.VertxManager INFO [main] SYSTEM: initial deployment complete _____________________________________________________
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Starting HMServer at 127.0.0.1:39292
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Read Configuration
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Create Bidcos Dispatcher
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] InitBidCosCache
Jan 8 12:07:27 de.eq3.ccu.server.BaseHMServer INFO [main] Create groupDefinitionProvider
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create VirtualDeviceHolder
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create VirtualDeviceHandlerRega
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create GroupAdministrationService
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create GroupDeviceDispatcher
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create GroupDeviceHandler
Jan 8 12:07:28 de.eq3.ccu.groupdevice.service.GroupDeviceHandler INFO [main] @GroupDeviceHandler - initializing...
Jan 8 12:07:28 de.eq3.ccu.groupdevice.service.GroupDeviceHandler INFO [main] --> created groupDeviceDispatcher (GroupDeviceService to BidCoS (via Dispatcher))
Jan 8 12:07:28 de.eq3.ccu.groupdevice.service.GroupDeviceHandler INFO [main] --> created virtualDeviceHandler (GroupDeviceService to ReGa)
Jan 8 12:07:28 de.eq3.ccu.groupdevice.service.GroupDeviceHandler INFO [main] --> got groupDefinitionProvider
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create BidCosGroupMemberProvider
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Init groupAdministrationService
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Init Virtual OS Device
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Init ESHLight Bridge
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create RrdDatalogging
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create MeasurementService
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Init MeasurementService
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create HTTP Server
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create BidCos context and start handler
Jan 8 12:07:28 de.eq3.ccu.server.BaseHMServer INFO [main] Create group context and start handler
Jan 8 12:07:29 de.eq3.ccu.server.BaseHMServer INFO [main] Starting HMServer done
Jan 8 12:07:38 de.eq3.ccu.virtualdevice.service.internal.rega.VirtualDeviceHandlerRega INFO [vert.x-eventloop-thread-1] (un)registerCallback on VirtualDeviceHandlerRega called from url: xmlrpc_bin://127.0.0.1:31999
Jan 8 12:07:38 de.eq3.ccu.virtualdevice.service.internal.rega.VirtualDeviceHandlerRega INFO [vert.x-eventloop-thread-1] Added InterfaceId: 1008
Jan 8 12:07:38 de.eq3.ccu.virtualdevice.service.internal.rega.BackendWorker INFO [vert.x-worker-thread-9] Execute BackendCommand: de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand
Jan 8 12:07:38 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] updateDevicesForClient -> 40 device addresses will be added
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000001
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000002
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000003
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000005
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000006
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000007
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000008
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000009
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000010
Jan 8 12:07:48 de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand INFO [vert.x-worker-thread-9] set ready config of INT0000011
Jan 8 12:15:00 de.eq3.ccu.virtualdevice.service.internal.rega.VirtualDeviceHandlerRega INFO [vert.x-eventloop-thread-1] (un)registerCallback on VirtualDeviceHandlerRega called from url: http://192.168.0.1:46739
Jan 8 12:15:00 de.eq3.ccu.virtualdevice.service.internal.rega.VirtualDeviceHandlerRega INFO [vert.x-eventloop-thread-1] Added InterfaceId: homeassistant-groups
Jan 8 12:15:00 de.eq3.ccu.virtualdevice.service.internal.rega.BackendWorker INFO [vert.x-worker-thread-14] Execute BackendCommand: de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand
Jan 8 12:17:04 de.eq3.ccu.virtualdevice.service.internal.rega.VirtualDeviceHandlerRega INFO [vert.x-eventloop-thread-1] (un)registerCallback on VirtualDeviceHandlerRega called from url: http://192.168.0.1:46739
Jan 8 12:17:04 de.eq3.ccu.virtualdevice.service.internal.rega.VirtualDeviceHandlerRega INFO [vert.x-eventloop-thread-1] Added InterfaceId: homeassistant-groups
Jan 8 12:17:04 de.eq3.ccu.virtualdevice.service.internal.rega.BackendWorker INFO [vert.x-worker-thread-10] Execute BackendCommand: de.eq3.ccu.virtualdevice.service.internal.rega.BackendUpdateDevicesCommand
in addition I have the watchdog error back again after a restart of the CCU container:
last logs of the reboot:
2021-01-08 11:07:22 | stdout | Starting network: eth0: link up, fixed, firewall, inet up, 172.17.0.7, OK
2021-01-08 11:07:22 | stdout | Starting irqbalance: OK
2021-01-08 11:07:22 | stdout | Updating Homematic RF-Hardware: no GPIO/USB connected RF-hardware found
2021-01-08 11:07:22 | stdout | Identifying Homematic RF-Hardware: HmRF: none, HmIP: none, OK
2021-01-08 11:07:20 | stdout | done
2021-01-08 11:07:20 | stdout | Populating /dev using udev: udevadm settle failed
2021-01-08 11:06:50 | stdout | Starting logging: OK
2021-01-08 11:06:50 | stdout | Initializing System: OK
2021-01-08 11:06:50 | stdout | Checking for Backup Restore: not required
2021-01-08 11:06:50 | stdout | Checking for Factory Reset: not required
2021-01-08 11:06:50 | stdout | Running sysctl: OK
2021-01-08 11:06:50 | stdout | Initializing RTC Clock: onboard, OK
2021-01-08 11:06:50 | stdout | Identifying onboard hardware: oci, OK
2021-01-08 11:06:22 | stdout | Stopping ReGaHss: .
2021-01-08 11:05:55 | stdout | [Fri Jan 8 12:05:55 CET 2021] /usr/local/etc/config/rc.d/97NeoServer neo_server stopped
2021-01-08 11:05:55 | stdout | [Fri Jan 8 12:05:55 CET 2021] /usr/local/etc/config/rc.d/97NeoServer stopping neo server ...
2021-01-08 11:05:55 | stdout | Stopping Third-Party Addons: OK
2021-01-08 11:05:55 | stdout | Stopping crond: OK
2021-01-08 11:05:55 | stdout | Setup onboard LEDs: shutdown, OK
2021-01-08 07:56:22 | stdout | Setup onboard LEDs: booted, OK
2021-01-08 07:56:22 | stdout | Starting crond: OK
2021-01-08 07:56:22 | stdout | Starting Third-Party Addons: OK
2021-01-08 07:56:22 | stdout | Starting NeoServer: disabled
2021-01-08 07:56:22 | stdout | Starting CloudMatic: OK
@nicx - the shutdown does not seem to complete but hangs at stopping ReGaHss. Docker's default timeout is 10s. The timeout can be modified with "--stop-timeout".
If enough people hit it we can change it in deploy.sh
Good point. I will then adapt it right away to give it more time; e.g. 30s should hopefully be enough. Is there also an option to set this default container-wise in the Dockerfile?
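For reference, a sketch of the options I am aware of (as far as I know there is no Dockerfile instruction for the stop timeout, only STOPSIGNAL; the timeout is set when the container is created or stopped):

# docker run: give the container 30s to shut down before SIGKILL is sent
docker run --stop-timeout 30 --privileged ghcr.io/jens-maus/raspberrymatic:snapshot

# docker stop: one-off override of the timeout
docker stop -t 30 ccu

# docker-compose equivalent
services:
  ccu:
    image: ghcr.io/jens-maus/raspberrymatic:snapshot
    stop_grace_period: 30s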
Is your feature request related to a problem? Please describe. I would like to run RaspberryMatic in parallel with Phoscon (a Zigbee bridge alternative from Dresden Elektronik) on one Raspberry Pi.
Describe the solution you'd like Ideally there would be a Docker integration for this; then both can be updated easily and they do not get in each other's way.
Describe alternatives you've considered None so far. I am simply waiting for the integration, or rather would set up a bounty for this ticket as an incentive.
Additional context I am aware that there have already been several tickets on this (#192, #248, #357). The last one enabled the integration of RaspberryMatic on the x86 platform and for virtualized environments (see also https://homematic-forum.de/forum/viewtopic.php?f=65&t=54055#p538104). However, what fell by the wayside there was that there is still no Docker integration.