maxduke opened this issue 1 year ago
Thanks for the report, I will have a look.
/usr/syno/etc/synocrond.config
is an auto-generated file; normally it can be regenerated (from the *.conf files) just by rebooting the NAS. But anyway, it's better to fix this.
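For reference, a minimal sketch to list the per-task *.conf sources that synocrond.config is rebuilt from (directory paths taken from the DEBUG log lines later in this thread; the loop is a harmless no-op on non-DSM systems):

```shell
# List the *.conf task files that synocrond.config is generated from.
# Paths are the synocron.d directories seen in the logs in this thread.
for d in /usr/syno/etc/synocron.d /usr/local/etc/synocron.d; do
    if [ -d "$d" ]; then
        ls "$d"/*.conf 2>/dev/null
    fi
done
conf_listing_done=yes
```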
It also looks like it's time to add more tasks to the table of known task names.
Although DSM itself runs systemctl reload synocrond to propagate synocrond configuration changes, it seems that, depending on the synocrond state, the reload may fail. Using a systemctl stop/start pair fixes this problem, but I want to do a few more tests.
So far the stop/start approach works well. I've committed the fix - please test.
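A minimal sketch of the approach described above, assuming the synocrond systemd unit name from this thread; the SYSTEMCTL variable is not part of the real fix, it only makes the logic easy to exercise without root:

```shell
# Hedged sketch: try a plain reload first, and fall back to a
# stop/start pair when the reload fails (the behaviour described above).
# SYSTEMCTL is overridable so the logic can be tested with a stub.
restart_synocrond() {
    sc=${SYSTEMCTL:-systemctl}
    if ! "$sc" reload synocrond 2>/dev/null; then
        "$sc" stop synocrond
        "$sc" start synocrond
    fi
}
```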
Thanks for fixing this.
2023-06-19 22:22:43,081 DEBUG run() triggered for the scheduled task
2023-06-19 22:22:43,261 DEBUG applying in-memory patches for pid 13542
2023-06-19 22:22:43,262 DEBUG successfully applied patch 'NVMe I/O HDD hibernation fix for DSM 7.2' for scemd
2023-06-19 22:22:43,359 DEBUG applying in-memory patches for pid 11664
2023-06-19 22:22:43,359 DEBUG successfully applied patch 'NVMe I/O HDD hibernation fix for DSM 7.2' for synostgd-disk
2023-06-19 22:22:43,361 DEBUG applying user config to task files: going to change /usr/syno/etc/synocron.d/autopkgupgrade.conf
2023-06-19 22:22:43,374 DEBUG applying user config to task files: going to change /usr/local/etc/synocron.d/ContainerManager.conf
2023-06-19 22:22:43,374 DEBUG applying user config to task files: going to change /usr/local/etc/synocron.d/CodecPack.conf
2023-06-19 22:22:43,375 DEBUG applying user config to task files: going to change /usr/local/etc/synocron.d/SynologyApplicationService.conf
2023-06-19 22:22:43,375 DEBUG applying user config to task files: going to change /usr/local/etc/synocron.d/DownloadStation.conf
2023-06-19 22:22:43,377 WARNING A new task builtin-dyn-myds-default has been found, likely installed by a new package or a DSM update. Re-run the script install to tell what to do with this task. Skipping it for now
2023-06-19 22:22:43,377 WARNING A new task pkg-ContainerManager-default has been found, likely installed by a new package or a DSM update. Re-run the script install to tell what to do with this task. Skipping it for now
2023-06-19 22:22:43,377 DEBUG going to change /usr/syno/etc/synocrond.config
Let's see if hibernation works on my NAS.
I checked /var/log/hibernationFull.log, and it seems there are a lot of WRITEs on dm-3 preventing hibernation (Docker is installed on NVMe).
[68709.165688] ppid:1(systemd), pid:12220(dockerd), dirtied subvolume 256 inode 356512 (hostconfig.json) on dm-3
[68709.165727] ppid:1(systemd), pid:12220(dockerd), dirtied subvolume 256 inode 346153 (2926dd66a2814ca6eae24b330e7222d0cf788a761f2618ff6f70e29e55f3ae4c) on dm-3
[68709.165813] ppid:1(systemd), pid:12220(dockerd), WRITE block 10627632 on dm-3 (16 sectors)
[68709.165898] ppid:2(kthreadd), pid:12222(kworker/u17:3), dirtied subvolume 256 inode 356513 (.tmp-config.v2.json2178785419) on dm-3
[68709.166910] ppid:1(systemd), pid:12220(dockerd), WRITE block 598368 on dm-3 (32 sectors)
[68709.166921] ppid:1(systemd), pid:12220(dockerd), WRITE block 2695520 on dm-3 (32 sectors)
[68709.166940] ppid:1(systemd), pid:12220(dockerd), WRITE block 598432 on dm-3 (64 sectors)
[68709.166944] ppid:1(systemd), pid:12220(dockerd), WRITE block 2695584 on dm-3 (64 sectors)
[68709.166956] ppid:1(systemd), pid:12220(dockerd), WRITE block 598688 on dm-3 (32 sectors)
[68709.166959] ppid:1(systemd), pid:12220(dockerd), WRITE block 2695840 on dm-3 (32 sectors)
[68709.166970] ppid:1(systemd), pid:12220(dockerd), WRITE block 599616 on dm-3 (32 sectors)
[68709.166973] ppid:1(systemd), pid:12220(dockerd), WRITE block 2696768 on dm-3 (32 sectors)
[68709.166984] ppid:1(systemd), pid:12220(dockerd), WRITE block 600256 on dm-3 (32 sectors)
[68709.166987] ppid:1(systemd), pid:12220(dockerd), WRITE block 2697408 on dm-3 (32 sectors)
[68709.166997] ppid:1(systemd), pid:12220(dockerd), WRITE block 601216 on dm-3 (32 sectors)
[68709.167000] ppid:1(systemd), pid:12220(dockerd), WRITE block 2698368 on dm-3 (32 sectors)
[68709.167011] ppid:1(systemd), pid:12220(dockerd), WRITE block 612320 on dm-3 (32 sectors)
[68709.167014] ppid:1(systemd), pid:12220(dockerd), WRITE block 2709472 on dm-3 (32 sectors)
[68709.167024] ppid:1(systemd), pid:12220(dockerd), WRITE block 612448 on dm-3 (32 sectors)
[68709.167027] ppid:1(systemd), pid:12220(dockerd), WRITE block 2709600 on dm-3 (32 sectors)
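To see at a glance which process generates most of these entries, a small helper can count process names in the log. This is a sketch, not part of the script; the sed pattern matches the ", pid:NNN(name)," format shown in the lines above:

```shell
# Count hibernation-log lines per writing process, parsing entries like:
#   [68709.165813] ppid:1(systemd), pid:12220(dockerd), WRITE block ... on dm-3
# The ", pid:" anchor deliberately skips the leading ppid field.
top_writers() {
    sed -nE 's/.*, pid:[0-9]+\(([^)]+)\),.*/\1/p' "$1" | sort | uniq -c | sort -rn
}
# usage on the NAS: top_writers /var/log/hibernationFull.log
```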
FWIW - on a DS1019+ (volume1 is IronWolf Pro drives and volume2 is NVMe, where I moved Docker) running DSM 7.2, I do see a noticeable drop in power consumption (probably more due to moving Docker to NVMe).
I do not see this type of behaviour on my system (running mostly *arr apps; usage is during downloads and Plex streaming, so power consumption is fairly low during the day).
> I checked /var/log/hibernationFull.log, and it seems there are a lot of WRITEs on dm-3 preventing hibernation (Docker is installed on NVMe).
Can you share your hibernationFull.log? NVMe-only writes should allow the HDDs to sleep with the script, so probably some disk I/O goes to the HDD(s). One idea for now: the Docker containers are installed on the NVMe volume, but Docker logs are somehow still written to the HDD.
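One hedged way to check that hypothesis, using only the stock docker CLI (`docker ps` and `docker inspect`; the whole check is skipped when docker is absent):

```shell
# Print where each running container keeps its JSON log file. On a
# Synology NAS the paths should be on the NVMe volume (e.g. /volume2/...)
# if container data really stays off the HDDs.
if command -v docker >/dev/null 2>&1; then
    for id in $(docker ps -q); do
        docker inspect --format '{{.Name}} {{.LogPath}}' "$id"
    done
else
    echo "docker CLI not available here"
fi
log_check_done=yes
```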
hibernationFull.log.zip Thank you. I uploaded one of the logs.
Interesting - someone resets the HDD hibernation timer every minute. Looks like the process SYNO.Core.Sharing is the main offender; I will check why it writes to the HDD so often (it keeps updating the sharing.db file).
If dm-3 is your NVMe partition, you can ignore all writes to it in the logs. You can check which volume dm-3 corresponds to by running
sudo dmsetup info /dev/dm-3 | grep -i Name
or, to show all mappings,
sudo lvdisplay | awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'
root@DiskStation:~# dmsetup info /dev/dm-3 | grep -i Name
Name: cachedev_0
root@DiskStation:~# lvdisplay | awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'
dm-0 syno_vg_reserved_area
dm-1 volume_2
dm-2 lv
(volume1 is the HDD volume, volume2 is the NVMe volume)
OK, I checked what the issue was - it is the Synology Virtual Machine Manager package. Looks like they made it even more bloatware-like than it was...
One of its running processes is synoccc_vnc_sha. This process is responsible for the "VNC share garbage collector", and this is basically how it works:
...
while ( !bVNCShareGarbageCollectorDisabled )
{
    VNCShareGarbageCollector_core(); // updates sharing.db here
    sleep(60u);
}
The 'disabled' flag can be set by the signal 15 (SIGTERM) handler. There are many more patterns like this in VMM, though normally with a different sleep interval, such as an hour.
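As an illustration of the same pattern, here is a hypothetical shell analogue: a worker loop whose only graceful-stop mechanism is a SIGTERM trap that flips a flag, so the loop exits only after its current sleep finishes:

```shell
# Hypothetical shell analogue of the VMM loop above: SIGTERM only sets a
# flag, so the worker finishes its current cycle and then stops cleanly.
worker() {
    stop=0
    trap 'stop=1' TERM
    while [ "$stop" -eq 0 ]; do
        : # stand-in for VNCShareGarbageCollector_core(), which rewrites sharing.db
        sleep "${SLEEP_SECS:-60}"   # the real loop sleeps 60 s here
    done
}

# Demo run with a short interval, so the graceful stop is quick.
SLEEP_SECS=1
worker &
wpid=$!
sleep 1
kill -TERM "$wpid"   # analogous to: kill -15 `pidof synoccc_vnc_sha`
wait "$wpid"
worker_rc=$?
```

Note that the trap only runs after the current `sleep` returns, which mirrors the real process finishing its last iteration before exiting.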
Those 1-minute wake-ups from synoccc_vnc_sha can be stopped using
kill -15 `pidof synoccc_vnc_sha`
The process will remain present, but won't do any harm:
root@NAS:~# ps -A | grep -i synoccc
15293 ? 00:00:00 synocccstated
16443 ? 00:00:00 synoccc_etcd_ca
16772 ? 00:00:00 synoccc_mem_cac
16783 ? 00:00:03 synocccd
17556 ? 00:00:00 synoccc_had
17559 ? 00:00:00 synoccc_vmevent
17561 ? 00:00:00 synoccc_guestd
17605 ? 00:00:00 synoccc_agenthe
17606 ? 00:00:00 synoccc_license
17607 ? 00:00:00 synoccc_replica
17608 ? 00:00:00 synoccc_polltas
17609 ? 00:00:00 synoccc_vnc_sha <defunct>
Note: this is a graceful stop - their signal 15 handler simply sets the 'disabled' flag, so the process finishes its last iteration within 1 minute and then exits.
I'm going to implement VMM support for the script, starting with the most frequently triggered processes.
I will track progress here: #3
Hello,
After a reboot I found that /usr/syno/etc/synocrond.config is an empty object.
Logs: