Closed dofl closed 1 year ago
Hi, very interesting indeed. Not sure how the process handles incoming data, but maybe @dennissiemensma has some more details about this.
You may want to look into the open files and system calls used by the process.
If the following packages are available in the container, try:
lsof -p 14087
strace -v -f -s 1024 -p 14087
Replace 14087 with whatever process ID you're looking into.
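If neither package is available, a rough alternative that needs no extra tooling is sampling the kernel's per-process I/O counters in /proc. A minimal sketch (the helper name is mine, and it defaults to the current shell's PID just so the demo runs; pass the suspect PID instead):

```shell
#!/usr/bin/env bash
# Sketch: estimate a process's write rate by sampling /proc/<pid>/io twice.
# Needs no extra packages; pass the target PID as the first argument
# (defaults to the current shell purely so the demo has something to read).
pid="${1:-$$}"

# write_bytes = cumulative bytes this process caused to be written to storage
read_write_bytes() { awk '/^write_bytes/ {print $2}' "/proc/$1/io"; }

b1=$(read_write_bytes "$pid")
sleep 2
b2=$(read_write_bytes "$pid")
echo "PID $pid wrote $(( (b2 - b1) / 2 )) bytes/s over the last 2s"
```

A steadily growing write_bytes on the s6-supervise process would confirm what iotop is reporting.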
The Synology NAS is missing some of those packages, but /proc/ can be inspected directly:
> ls -l /proc/14087/fd
> total 0
> lr-x------ 1 root root 64 Oct 12 16:22 0 -> /dev/null
> l-wx------ 1 root root 64 Oct 12 16:22 1 -> 'pipe:[13751701]'
> l-wx------ 1 root root 64 Oct 12 16:22 2 -> 'pipe:[13751702]'
> l-wx------ 1 root root 64 Oct 12 16:22 3 -> /run/s6/services/dsmr_remote_datalogger/supervise/lock
> lr-x------ 1 root root 64 Oct 12 16:22 4 -> /run/s6/services/dsmr_remote_datalogger/supervise/control
> l-wx------ 1 root root 64 Oct 12 16:22 5 -> /run/s6/services/dsmr_remote_datalogger/supervise/control
> lrwx------ 1 root root 64 Oct 12 16:22 6 -> 'anon_inode:[signalfd]'
It's bound to a pipe that gives no further information; I wasn't able to find any files connected to this pipe. Can anyone tell me if they're seeing the same behavior as well?
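For what it's worth, an anonymous pipe can usually be traced to its other end by scanning /proc/*/fd for the same inode number. A rough sketch (find_pipe_peers is just an illustrative helper name; the inode comes from the fd listing above):

```shell
#!/usr/bin/env bash
# Sketch: list every process holding a file descriptor on a given pipe inode.
# You can only see fds of processes you have permission to inspect,
# so run it as root (or inside the container) for full coverage.
find_pipe_peers() {
  local inode="$1" fd pid
  for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "pipe:[$inode]" ]; then
      pid=${fd#/proc/}; pid=${pid%%/*}
      printf '%s %s\n' "$pid" "$(tr '\0' ' ' < "/proc/$pid/cmdline" 2>/dev/null)"
    fi
  done | sort -un
}

find_pipe_peers 13751701   # inode taken from the ls -l output above
```

Whatever turns up on the write side of that pipe is the process doing the actual writing.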
I'm not using the containerized version. Did you try installing lsof and strace within the container? They might be available for install. If so, I'm sure they will give you more insight into whatever the process is doing.
Yeah, you can add packages to the running container, but I believe dofl is talking about the host system (Synology). Example to install a package in a running container:
docker exec -ti dsmr bash
apk add strace
I'm most likely facing the same behavior on my NAS. However, this is not really an area I'm familiar with. Is this perhaps normal behavior in a non-Docker setup as well?
On the Synology host you can try running:
sudo synogear install
which may make strace available.
I'd start by checking the process in the container. Maybe try iotop as well.
When I run iotop, filtering on active write processes, it's almost constantly idle. Only postgres seems to write to the disk every once in a while, which makes sense.
I just noticed it's about the remote datalogger, not the datalogger. I'll investigate this further tonight. I suspect the init process causes the issue: it shouldn't start the remote datalogger if you're not using it (i.e. the variable isn't defined).
I believe I've fixed it. Please validate with the new image.
The Docker image uses s6-overlay to start processes based on defined variables. By default the DSMR remote datalogger variable isn't set, so the process shouldn't start. However, because it's defined as a service, s6 doesn't see the process running and runs the startup script over and over again. The fix I found is to add a sleep infinity step when the variable hasn't been defined: s6 then considers the service started properly, while under the hood it doesn't do anything anymore, so it also stops restarting over and over again.
if [[ -n "${SOME_VAR:-}" ]]; then
    <START SOME PROCESS>
else
    sleep infinity
fi
So what happens now is that the service goes to "sleep". I found this solution in one of the linuxserver Docker images.
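For reference, a complete run script along those lines might look like the sketch below. Note that DATALOGGER_ENABLED and the placeholder command are assumed names for illustration, not necessarily what the image actually uses:

```shell
#!/usr/bin/with-contenv bash
# Hypothetical s6-overlay run script for the dsmr_remote_datalogger service.
# DATALOGGER_ENABLED is an assumed variable name for illustration.
if [[ -n "${DATALOGGER_ENABLED:-}" ]]; then
    exec <START THE DATALOGGER PROCESS>
else
    # Variable not set: park the service so s6 sees it as "up"
    # instead of re-running this script in a tight loop (which is
    # what caused the constant supervise writes).
    exec sleep infinity
fi
```

The exec matters: s6 supervises the sleep directly instead of a dead shell.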
I just did a pull on the new image and the constant write is gone. Seems like it was the S6 overlay indeed. Nice find!
It was a good collaboration. Nice find by you as well!
I believe the constant write problem is back. I run the Docker container on my Unraid machine, and when I do an 'iotop -c', the output is as follows:
As far as my knowledge goes, I believe the writing comes from the three DSMR services. The writing, three times 11 KB/s, is constant. Is it possible to fix it again?
Thank you in advance.
Support guidelines
I've found an issue and checked that ...
Description
I'm seeing a constant 10 to 20 KB/s data write by the 's6-supervise dsmr_remote_datalogger' process and I cannot find out why.
Expected behaviour
With debug options enabled, the DSMR logs show the correct behavior: it waits 10 seconds until a new telegram is received. I cannot find out why this process is writing constantly.
Actual behaviour
A constant stream of data is being written, although I cannot find out which files are involved; following the pipe does not lead to any useful file path. It's not the database, and it's not the postgres process.
Steps to reproduce
Run the DSMR Docker container. After that, run 'sudo htop'.
Docker info
Version
Docker compose version (type docker-compose --version): docker-compose version 1.28.5, build 24fb474e
System info (type uname -a): Linux NAS 4.4.180+ #42962 SMP Wed Sep 21 10:56:47 CST 2022 x86_64 GNU/Linux synology_apollolake_218+
Docker compose
Container logs
Additional info
I was curious if other docker users are seeing the same symptoms as I don't recall seeing it before. I don't know if it's a docker or DSMR issue, but I hope someone has a clue where I can find out what's the issue.