home-assistant / operating-system

Home Assistant Operating System
Apache License 2.0

QEMU agent command 'guest-fsfreeze-freeze' issue with HAOS 12.1 #3251

Closed GrimD closed 6 months ago

GrimD commented 7 months ago

Describe the issue you are experiencing

With HAOS 12.1, when trying to snapshot the VM running HAOS for backup, it fails with the error: "unable to execute QEMU agent command 'guest-fsfreeze-freeze': fsfreeze hook has failed with status 1". I tried shutting the VM down, starting it again and retrying the backup, but got the same error. The QEMU agent is otherwise running correctly. After restoring a backup of the VM so that it is back on HAOS 12.0, the snapshot (and therefore the backup) runs fine again.

The hypervisor is KVM within Unraid OS.
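For context, the error text means the QEMU guest agent inside the VM is configured with a fsfreeze hook, and that hook exited with status 1. A minimal sketch of the hook protocol (a hypothetical stand-in script, not the actual HAOS hook) shows where that status comes from:

```shell
# Sketch of the qemu-guest-agent fsfreeze-hook protocol (hypothetical
# stand-in, not the real HAOS script). The agent invokes the hook with
# "freeze" before guest-fsfreeze-freeze and "thaw" after
# guest-fsfreeze-thaw; any non-zero exit aborts the operation and is
# reported by the hypervisor as "fsfreeze hook has failed with status N".
cat > /tmp/fsfreeze-hook <<'EOF'
#!/bin/sh
case "$1" in
    freeze) exit 0 ;;  # quiesce application state; an "exit 1" here reproduces the reported error
    thaw)   exit 0 ;;  # resume application state
    *)      exit 2 ;;  # unknown argument
esac
EOF
chmod +x /tmp/fsfreeze-hook

/tmp/fsfreeze-hook freeze && echo "freeze ok"
/tmp/fsfreeze-hook thaw && echo "thaw ok"
```

In other words, "status 1" points at the freeze step failing inside the guest, not at the agent itself being down, which matches the agent otherwise behaving normally here.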

What operating system image do you use?

ova (for Virtual Machines)

What version of Home Assistant Operating System is installed?

12.1

Did you upgrade the Operating System?

Yes

Steps to reproduce the issue

1. Upgrade to 12.1
2. Try to back up the VM via the VM Backup plugin in Unraid
3. ...

Anything in the Supervisor logs that might be useful for us?

I don't think so, but here is the output from a minute or so after the issue occurred:

24-03-14 19:34:02 WARNING (SyncWorker_0) [supervisor.host.sound] Can't update PulseAudio data: Failed to connect to pulseaudio server
24-03-14 19:34:02 WARNING (MainThread) [supervisor.host.network] Requested to update interface enp1s0 which does not exist or is disabled.
24-03-14 19:34:02 INFO (MainThread) [supervisor.host.apparmor] Loading AppArmor Profiles: {'hassio-supervisor'}
24-03-14 19:34:02 INFO (MainThread) [supervisor.mounts.manager] Initializing all user-configured mounts
24-03-14 19:34:02 INFO (MainThread) [supervisor.docker.monitor] Started docker events monitor
24-03-14 19:34:02 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json
24-03-14 19:34:03 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/amd64-hassio-cli with version 2024.03.1
24-03-14 19:34:03 INFO (MainThread) [supervisor.plugins.cli] Starting CLI plugin
24-03-14 19:34:03 INFO (MainThread) [supervisor.docker.cli] Starting CLI ghcr.io/home-assistant/amd64-hassio-cli with version 2024.03.1 - 172.30.32.5
24-03-14 19:34:03 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/amd64-hassio-dns with version 2024.03.0
24-03-14 19:34:03 INFO (MainThread) [supervisor.plugins.dns] Starting CoreDNS plugin
24-03-14 19:34:03 INFO (MainThread) [supervisor.docker.dns] Starting DNS ghcr.io/home-assistant/amd64-hassio-dns with version 2024.03.0 - 172.30.32.3
24-03-14 19:34:03 INFO (MainThread) [supervisor.plugins.dns] Updated /etc/resolv.conf
24-03-14 19:34:03 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/amd64-hassio-audio with version 2023.12.0
24-03-14 19:34:03 INFO (MainThread) [supervisor.plugins.audio] Starting Audio plugin
24-03-14 19:34:04 INFO (MainThread) [supervisor.docker.audio] Starting Audio ghcr.io/home-assistant/amd64-hassio-audio with version 2023.12.0 - 172.30.32.4
24-03-14 19:34:04 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/amd64-hassio-observer with version 2023.06.0
24-03-14 19:34:04 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/amd64-hassio-multicast with version 2024.03.0
24-03-14 19:34:04 INFO (MainThread) [supervisor.plugins.multicast] Starting Multicast plugin
24-03-14 19:34:04 INFO (MainThread) [supervisor.docker.multicast] Starting Multicast ghcr.io/home-assistant/amd64-hassio-multicast with version 2024.03.0 - Host
24-03-14 19:34:04 INFO (MainThread) [supervisor.homeassistant.secrets] Loaded 1 Home Assistant secrets
24-03-14 19:34:04 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/home-assistant/qemux86-64-homeassistant with version 2024.3.1
24-03-14 19:34:04 INFO (MainThread) [supervisor.os.manager] Detect Home Assistant Operating System 12.1 / BootSlot A
24-03-14 19:34:04 INFO (MainThread) [supervisor.store.git] Loading add-on /data/addons/git/5c53de3b repository
24-03-14 19:34:04 INFO (MainThread) [supervisor.store.git] Loading add-on /data/addons/core repository
24-03-14 19:34:04 INFO (MainThread) [supervisor.store.git] Loading add-on /data/addons/git/f4f71350 repository
24-03-14 19:34:04 INFO (MainThread) [supervisor.store.git] Loading add-on /data/addons/git/a0d7b954 repository
24-03-14 19:34:05 INFO (MainThread) [supervisor.store] Loading add-ons from store: 78 all - 78 new - 0 remove
24-03-14 19:34:05 INFO (MainThread) [supervisor.addons.manager] Found 6 installed add-ons
24-03-14 19:34:05 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/hassio-addons/node-red/amd64 with version 17.0.9
24-03-14 19:34:05 INFO (MainThread) [supervisor.docker.interface] Attaching to homeassistant/amd64-addon-ssh with version 9.10.0
24-03-14 19:34:05 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/hassio-addons/vscode/amd64 with version 5.15.0
24-03-14 19:34:05 INFO (MainThread) [supervisor.docker.interface] Attaching to f4f71350/amd64-addon-ewelink_smart_home_slug with version 1.4.3
24-03-14 19:34:05 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/esphome/esphome-hassio with version 2024.2.2
24-03-14 19:34:05 INFO (MainThread) [supervisor.docker.interface] Attaching to ghcr.io/hassio-addons/zwave-js-ui/amd64 with version 3.4.1
24-03-14 19:34:05 INFO (MainThread) [supervisor.backups.manager] Found 46 backup files
24-03-14 19:34:06 INFO (MainThread) [supervisor.discovery] Loaded 2 messages
24-03-14 19:34:06 INFO (MainThread) [supervisor.ingress] Loaded 0 ingress sessions
24-03-14 19:34:06 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state setup
24-03-14 19:34:06 INFO (MainThread) [supervisor.resolution.check] System checks complete
24-03-14 19:34:06 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state setup
24-03-14 19:34:06 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
24-03-14 19:34:06 INFO (MainThread) [supervisor.jobs] 'ResolutionFixup.run_autofix' blocked from execution, system is not running - setup
24-03-14 19:34:06 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state setup
24-03-14 19:34:06 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
24-03-14 19:34:06 INFO (MainThread) [__main__] Running Supervisor
24-03-14 19:34:06 INFO (MainThread) [supervisor.os.manager] Rauc: A - marked slot kernel.0 as good
24-03-14 19:34:06 INFO (MainThread) [supervisor.addons.manager] Phase 'initialize' starting 0 add-ons
24-03-14 19:34:06 INFO (MainThread) [supervisor.addons.manager] Phase 'system' starting 1 add-ons
24-03-14 19:34:06 INFO (MainThread) [supervisor.docker.addon] Starting Docker add-on ghcr.io/hassio-addons/zwave-js-ui/amd64 with version 3.4.1
24-03-14 19:34:36 INFO (MainThread) [supervisor.addons.manager] Phase 'services' starting 1 add-ons
24-03-14 19:34:37 INFO (MainThread) [supervisor.docker.addon] Starting Docker add-on ghcr.io/esphome/esphome-hassio with version 2024.2.2
24-03-14 19:34:42 INFO (MainThread) [supervisor.core] Start Home Assistant Core
24-03-14 19:34:42 INFO (SyncWorker_0) [supervisor.docker.manager] Starting homeassistant
24-03-14 19:34:42 INFO (MainThread) [supervisor.homeassistant.core] Wait until Home Assistant is ready
24-03-14 19:34:43 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state startup
24-03-14 19:34:43 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
24-03-14 19:34:47 INFO (MainThread) [supervisor.homeassistant.api] Updated Home Assistant API token
24-03-14 19:34:47 INFO (MainThread) [supervisor.homeassistant.core] Home Assistant Core state changed to NOT_RUNNING
24-03-14 19:34:57 INFO (MainThread) [supervisor.homeassistant.core] Home Assistant Core state changed to RUNNING
24-03-14 19:34:57 INFO (MainThread) [supervisor.homeassistant.core] Detect a running Home Assistant instance
24-03-14 19:34:57 INFO (MainThread) [supervisor.addons.manager] Phase 'application' starting 2 add-ons
24-03-14 19:34:57 INFO (MainThread) [supervisor.docker.addon] Starting Docker add-on ghcr.io/hassio-addons/node-red/amd64 with version 17.0.9
24-03-14 19:34:58 INFO (MainThread) [supervisor.docker.addon] Starting Docker add-on f4f71350/amd64-addon-ewelink_smart_home_slug with version 1.4.3
24-03-14 19:34:59 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request initialize
24-03-14 19:34:59 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request initialize
24-03-14 19:34:59 INFO (MainThread) [supervisor.api.proxy] WebSocket access from f4f71350_ewelink_smart_home_slug
24-03-14 19:34:59 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request running
24-03-14 19:35:01 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request initialize
24-03-14 19:35:01 INFO (MainThread) [supervisor.api.proxy] WebSocket access from f4f71350_ewelink_smart_home_slug
24-03-14 19:35:01 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request running
24-03-14 19:35:07 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request initialize
24-03-14 19:35:07 INFO (MainThread) [supervisor.api.proxy] WebSocket access from a0d7b954_nodered
24-03-14 19:35:07 INFO (MainThread) [supervisor.api.proxy] Home Assistant WebSocket API request running
24-03-14 19:35:27 INFO (MainThread) [supervisor.misc.tasks] All core tasks are scheduled
24-03-14 19:35:27 INFO (MainThread) [supervisor.core] Supervisor is up and running
24-03-14 19:35:27 INFO (MainThread) [supervisor.host.info] Updating local host information
24-03-14 19:35:27 INFO (MainThread) [supervisor.updater] Fetching update data from https://version.home-assistant.io/stable.json
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.check] Starting system checks with state running
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for ipv4_connection_problem/system
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for security/core
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for free_space/system
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for pwned/addon
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_failed/dns_server
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for no_current_backup/system
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.module] Create new suggestion create_full_backup - system / None
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.module] Create new issue no_current_backup - system / None
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for docker_config/system
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for dns_server_ipv6_error/dns_server
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for multiple_data_disks/system
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.checks.base] Run check for trust/supervisor
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.check] System checks complete
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.evaluate] Starting system evaluation with state running
24-03-14 19:35:27 INFO (MainThread) [supervisor.host.services] Updating service information
24-03-14 19:35:27 INFO (MainThread) [supervisor.host.network] Updating local network information
24-03-14 19:35:27 INFO (MainThread) [supervisor.host.sound] Updating PulseAudio information
24-03-14 19:35:27 INFO (MainThread) [supervisor.host.manager] Host information reload completed
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.evaluate] System evaluation complete
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.fixup] Starting system autofix at state running
24-03-14 19:35:27 INFO (MainThread) [supervisor.resolution.fixup] System autofix complete

Anything in the Host logs that might be useful for us?

I don't think so, but here is the output from a minute or so after the issue occurred:

Mar 14 19:34:04 DBHA01 systemd[1]: Bluetooth service was skipped because of an unmet condition check (ConditionPathIsDirectory=/sys/class/bluetooth).
Mar 14 19:34:06 DBHA01 os-agent[156]: INFO: 2024/03/14 19:34:06 main.go:95: Diagnostics is now true
Mar 14 19:34:06 DBHA01 systemd[1]: var-lib-docker-overlay2-d47359532d23a4634dfce17de424411c70ab5781aa21f59f0feff2f26cfde450\x2dinit-merged.mount: Deactivated successfully.
Mar 14 19:34:06 DBHA01 systemd[1]: mnt-data-docker-overlay2-d47359532d23a4634dfce17de424411c70ab5781aa21f59f0feff2f26cfde450\x2dinit-merged.mount: Deactivated successfully.
Mar 14 19:34:06 DBHA01 kernel: hassio: port 6(veth31abb86) entered blocking state
Mar 14 19:34:06 DBHA01 kernel: hassio: port 6(veth31abb86) entered disabled state
Mar 14 19:34:06 DBHA01 kernel: veth31abb86: entered allmulticast mode
Mar 14 19:34:06 DBHA01 kernel: veth31abb86: entered promiscuous mode
Mar 14 19:34:06 DBHA01 NetworkManager[311]: <info>  [1710444846.5600] manager: (veth802255f): new Veth device (/org/freedesktop/NetworkManager/Devices/17)
Mar 14 19:34:06 DBHA01 NetworkManager[311]: <info>  [1710444846.5608] manager: (veth31abb86): new Veth device (/org/freedesktop/NetworkManager/Devices/18)
Mar 14 19:34:06 DBHA01 systemd[1]: Started libcontainer container e84ba2ef5eb89bc1a5ba92179cca9bcbf199871b45c49a1410b5a45108d10cbd.
Mar 14 19:34:06 DBHA01 kernel: eth0: renamed from veth802255f
Mar 14 19:34:06 DBHA01 kernel: hassio: port 6(veth31abb86) entered blocking state
Mar 14 19:34:06 DBHA01 kernel: hassio: port 6(veth31abb86) entered forwarding state
Mar 14 19:34:06 DBHA01 NetworkManager[311]: <info>  [1710444846.7201] device (veth31abb86): carrier: link connected
Mar 14 19:34:09 DBHA01 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar 14 19:34:32 DBHA01 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 19:34:32 DBHA01 kernel: kauditd_printk_skb: 213 callbacks suppressed
Mar 14 19:34:32 DBHA01 kernel: audit: type=1334 audit(1710444872.852:186): prog-id=14 op=UNLOAD
Mar 14 19:34:32 DBHA01 kernel: audit: type=1334 audit(1710444872.852:187): prog-id=13 op=UNLOAD
Mar 14 19:34:32 DBHA01 kernel: audit: type=1334 audit(1710444872.852:188): prog-id=12 op=UNLOAD
Mar 14 19:34:32 DBHA01 systemd[1]: systemd-timedated.service: Deactivated successfully.
Mar 14 19:34:32 DBHA01 kernel: audit: type=1334 audit(1710444872.911:189): prog-id=25 op=UNLOAD
Mar 14 19:34:32 DBHA01 kernel: audit: type=1334 audit(1710444872.911:190): prog-id=24 op=UNLOAD
Mar 14 19:34:32 DBHA01 kernel: audit: type=1334 audit(1710444872.911:191): prog-id=23 op=UNLOAD
Mar 14 19:34:36 DBHA01 systemd[1]: var-lib-docker-overlay2-e3f7c5e767ef6fa1ee15f65c0c8cd9887150cf5bf9507f926df8380bbf410d65\x2dinit-merged.mount: Deactivated successfully.
Mar 14 19:34:36 DBHA01 systemd[1]: mnt-data-docker-overlay2-e3f7c5e767ef6fa1ee15f65c0c8cd9887150cf5bf9507f926df8380bbf410d65\x2dinit-merged.mount: Deactivated successfully.
Mar 14 19:34:36 DBHA01 systemd[1]: Started libcontainer container 488dab42c7fd662ebe5104e491c9daddede85c725169764be3281f18a1af2d6b.
Mar 14 19:34:36 DBHA01 kernel: audit: type=1334 audit(1710444876.975:192): prog-id=46 op=LOAD
Mar 14 19:34:36 DBHA01 kernel: audit: type=1334 audit(1710444876.976:193): prog-id=47 op=LOAD
Mar 14 19:34:36 DBHA01 kernel: audit: type=1300 audit(1710444876.976:193): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2071 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:36 DBHA01 kernel: audit: type=1327 audit(1710444876.976:193): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F34383864616234326337666436363265626535313034653439
Mar 14 19:34:42 DBHA01 systemd[1]: Started libcontainer container ed9403d54863ddd32bce6c2a7d8471c2158ef9b876c74ffb00648679a8668f1e.
Mar 14 19:34:42 DBHA01 kernel: kauditd_printk_skb: 12 callbacks suppressed
Mar 14 19:34:42 DBHA01 kernel: audit: type=1334 audit(1710444882.112:198): prog-id=50 op=LOAD
Mar 14 19:34:42 DBHA01 kernel: audit: type=1300 audit(1710444882.112:198): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2541 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:42 DBHA01 kernel: audit: type=1327 audit(1710444882.112:198): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F65643934303364353438363364646433326263653663326137
Mar 14 19:34:42 DBHA01 kernel: audit: type=1334 audit(1710444882.112:199): prog-id=51 op=LOAD
Mar 14 19:34:42 DBHA01 kernel: audit: type=1300 audit(1710444882.112:199): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2541 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:42 DBHA01 kernel: audit: type=1327 audit(1710444882.112:199): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F65643934303364353438363364646433326263653663326137
Mar 14 19:34:42 DBHA01 kernel: audit: type=1334 audit(1710444882.112:200): prog-id=51 op=UNLOAD
Mar 14 19:34:42 DBHA01 kernel: audit: type=1300 audit(1710444882.112:200): arch=c000003e syscall=3 success=yes exit=0 a0=11 a1=0 a2=0 a3=0 items=0 ppid=2541 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:42 DBHA01 kernel: audit: type=1327 audit(1710444882.112:200): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F65643934303364353438363364646433326263653663326137
Mar 14 19:34:42 DBHA01 kernel: audit: type=1334 audit(1710444882.112:201): prog-id=50 op=UNLOAD
Mar 14 19:34:44 DBHA01 systemd[1]: Bluetooth service was skipped because of an unmet condition check (ConditionPathIsDirectory=/sys/class/bluetooth).
Mar 14 19:34:44 DBHA01 kernel: Bluetooth: Core ver 2.22
Mar 14 19:34:44 DBHA01 kernel: NET: Registered PF_BLUETOOTH protocol family
Mar 14 19:34:44 DBHA01 kernel: Bluetooth: HCI device and connection manager initialized
Mar 14 19:34:44 DBHA01 kernel: Bluetooth: HCI socket layer initialized
Mar 14 19:34:44 DBHA01 kernel: Bluetooth: L2CAP socket layer initialized
Mar 14 19:34:44 DBHA01 kernel: Bluetooth: SCO socket layer initialized
Mar 14 19:34:57 DBHA01 systemd[1]: var-lib-docker-overlay2-e4e7043df76849161ea35669ae4c44956604c1c64b02877609da2852c75d26d1\x2dinit-merged.mount: Deactivated successfully.
Mar 14 19:34:57 DBHA01 systemd[1]: mnt-data-docker-overlay2-e4e7043df76849161ea35669ae4c44956604c1c64b02877609da2852c75d26d1\x2dinit-merged.mount: Deactivated successfully.
Mar 14 19:34:57 DBHA01 systemd[1]: Started libcontainer container 0ebe00e6b5da31be656658965834aee6d292cadc2efa414a2ecb379fe833c8a3.
Mar 14 19:34:57 DBHA01 kernel: kauditd_printk_skb: 5 callbacks suppressed
Mar 14 19:34:57 DBHA01 kernel: audit: type=1334 audit(1710444897.560:203): prog-id=53 op=LOAD
Mar 14 19:34:57 DBHA01 kernel: audit: type=1334 audit(1710444897.560:204): prog-id=54 op=LOAD
Mar 14 19:34:57 DBHA01 kernel: audit: type=1300 audit(1710444897.560:204): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2716 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:57 DBHA01 kernel: audit: type=1327 audit(1710444897.560:204): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F30656265303065366235646133316265363536363538393635
Mar 14 19:34:57 DBHA01 kernel: audit: type=1334 audit(1710444897.560:205): prog-id=55 op=LOAD
Mar 14 19:34:57 DBHA01 kernel: audit: type=1300 audit(1710444897.560:205): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2716 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:57 DBHA01 kernel: audit: type=1327 audit(1710444897.560:205): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F30656265303065366235646133316265363536363538393635
Mar 14 19:34:57 DBHA01 kernel: audit: type=1334 audit(1710444897.560:206): prog-id=55 op=UNLOAD
Mar 14 19:34:57 DBHA01 kernel: audit: type=1300 audit(1710444897.560:206): arch=c000003e syscall=3 success=yes exit=0 a0=12 a1=0 a2=0 a3=0 items=0 ppid=2716 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=unconfined key=(null)
Mar 14 19:34:57 DBHA01 kernel: audit: type=1327 audit(1710444897.560:206): proctitle=72756E63002D2D726F6F74002F7661722F72756E2F646F636B65722F72756E74696D652D72756E632F6D6F6279002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6D6F62792F30656265303065366235646133316265363536363538393635
Mar 14 19:34:57 DBHA01 kernel: hassio: port 7(veth943cb12) entered blocking state
Mar 14 19:34:57 DBHA01 kernel: hassio: port 7(veth943cb12) entered disabled state
Mar 14 19:34:57 DBHA01 kernel: veth943cb12: entered allmulticast mode
Mar 14 19:34:57 DBHA01 kernel: veth943cb12: entered promiscuous mode
Mar 14 19:34:57 DBHA01 NetworkManager[311]: <info>  [1710444897.7732] manager: (vetha31bc63): new Veth device (/org/freedesktop/NetworkManager/Devices/19)
Mar 14 19:34:57 DBHA01 NetworkManager[311]: <info>  [1710444897.7740] manager: (veth943cb12): new Veth device (/org/freedesktop/NetworkManager/Devices/20)
Mar 14 19:34:57 DBHA01 systemd[1]: Started libcontainer container 7fb7985a067b046dbba4745461f06c12d820df447bd987b9976e96a2834c11d6.
Mar 14 19:34:58 DBHA01 kernel: eth0: renamed from vetha31bc63
Mar 14 19:34:58 DBHA01 kernel: hassio: port 7(veth943cb12) entered blocking state
Mar 14 19:34:58 DBHA01 kernel: hassio: port 7(veth943cb12) entered forwarding state
Mar 14 19:34:58 DBHA01 NetworkManager[311]: <info>  [1710444898.0143] device (veth943cb12): carrier: link connected
Mar 14 19:35:27 DBHA01 kernel: kauditd_printk_skb: 58 callbacks suppressed
Mar 14 19:35:27 DBHA01 kernel: audit: type=1334 audit(1710444927.632:227): prog-id=61 op=LOAD
Mar 14 19:35:27 DBHA01 kernel: audit: type=1334 audit(1710444927.632:228): prog-id=62 op=LOAD
Mar 14 19:35:27 DBHA01 kernel: audit: type=1334 audit(1710444927.633:229): prog-id=63 op=LOAD
Mar 14 19:35:27 DBHA01 systemd[1]: Starting Hostname Service...
Mar 14 19:35:27 DBHA01 systemd[1]: Started Hostname Service.
Mar 14 19:35:27 DBHA01 kernel: audit: type=1334 audit(1710444927.771:230): prog-id=64 op=LOAD
Mar 14 19:35:27 DBHA01 kernel: audit: type=1334 audit(1710444927.771:231): prog-id=65 op=LOAD
Mar 14 19:35:27 DBHA01 kernel: audit: type=1334 audit(1710444927.771:232): prog-id=66 op=LOAD
Mar 14 19:35:27 DBHA01 systemd[1]: Starting Time & Date Service...
Mar 14 19:35:27 DBHA01 systemd[1]: Started Time & Date Service.
Mar 14 19:35:57 DBHA01 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 19:35:57 DBHA01 kernel: audit: type=1334 audit(1710444957.796:233): prog-id=63 op=UNLOAD
Mar 14 19:35:57 DBHA01 kernel: audit: type=1334 audit(1710444957.796:234): prog-id=62 op=UNLOAD
Mar 14 19:35:57 DBHA01 kernel: audit: type=1334 audit(1710444957.796:235): prog-id=61 op=UNLOAD
Mar 14 19:35:57 DBHA01 systemd[1]: systemd-timedated.service: Deactivated successfully.
Mar 14 19:35:57 DBHA01 kernel: audit: type=1334 audit(1710444957.925:236): prog-id=66 op=UNLOAD
Mar 14 19:35:57 DBHA01 kernel: audit: type=1334 audit(1710444957.925:237): prog-id=65 op=UNLOAD
Mar 14 19:35:57 DBHA01 kernel: audit: type=1334 audit(1710444957.925:238): prog-id=64 op=UNLOAD
Mar 14 19:39:58 DBHA01 systemd[1]: run-docker-runtime\x2drunc-moby-0ebe00e6b5da31be656658965834aee6d292cadc2efa414a2ecb379fe833c8a3-runc.yhokBa.mount: Deactivated successfully.
Mar 14 19:41:33 DBHA01 kernel: audit: type=1334 audit(1710445293.920:239): prog-id=67 op=LOAD
Mar 14 19:41:33 DBHA01 systemd-timesyncd[370]: Network configuration changed, trying to establish connection.
Mar 14 19:41:33 DBHA01 systemd[1]: Started Journal Gateway Service.
Mar 14 19:41:33 DBHA01 systemd-timesyncd[370]: Contacted time server 162.159.200.1:123 (time.cloudflare.com).
Mar 14 19:41:34 DBHA01 systemd-journal-gatewayd[3751]: microhttpd: MHD_OPTION_EXTERNAL_LOGGER is not the first option specified for the daemon. Some messages may be printed by the standard MHD logger.

System information

System Information

version core-2024.3.1
installation_type Home Assistant OS
dev false
hassio true
docker true
user root
virtualenv false
python_version 3.12.2
os_name Linux
os_version 6.6.20-haos
arch x86_64
timezone Europe/London
config_dir /config
Home Assistant Community Store
GitHub API | ok
GitHub Content | ok
GitHub Web | ok
GitHub API Calls Remaining | 4913
Installed Version | 1.34.0
Stage | running
Available Repositories | 1399
Downloaded Repositories | 4

Home Assistant Cloud
logged_in | true
subscription_expiration | 15 March 2024 at 00:00
relayer_connected | true
relayer_region | eu-central-1
remote_enabled | true
remote_connected | true
alexa_enabled | true
google_enabled | true
remote_server | eu-central-1-10.ui.nabu.casa
certificate_status | ready
instance_id | 45fd9f3fa0764ccca8f73de93c3e5ea3
can_reach_cert_server | ok
can_reach_cloud_auth | ok
can_reach_cloud | ok

Home Assistant Supervisor
host_os | Home Assistant OS 12.1
update_channel | stable
supervisor_version | supervisor-2024.03.0
agent_version | 1.6.0
docker_version | 24.0.7
disk_total | 30.8 GB
disk_used | 9.8 GB
healthy | true
supported | true
board | ova
supervisor_api | ok
version_api | ok
installed_addons | Node-RED (17.0.9), Terminal & SSH (9.10.0), eWeLink Smart Home (1.4.3), Studio Code Server (5.15.0), Z-Wave JS UI (3.4.1), ESPHome (2024.2.2)

Dashboards
dashboards | 3
resources | 0
views | 15
mode | storage

Recorder
oldest_recorder_run | 2 March 2024 at 13:57
current_recorder_run | 14 March 2024 at 19:34
estimated_db_size | 373.99 MiB
database_engine | sqlite
database_version | 3.44.2

Additional information

Here is the relevant section from the VM Backup plugin log:

2024-03-14 19:39:54 information: DBHA01 can be found on the system. attempting backup.
2024-03-14 19:39:54 information: creating local DBHA01.xml to work with during backup.
2024-03-14 19:39:54 information: /mnt/user/Backups/Unraid/VMs/DBHA01 exists. continuing.
2024-03-14 19:39:54 information: skip_vm_shutdown is false and use_snapshots is 1. skipping vm shutdown procedure. DBHA01 is running. can_backup_vm set to y.
2024-03-14 19:39:54 information: actually_copy_files is 1.
2024-03-14 19:39:54 information: can_backup_vm flag is y. starting backup of DBHA01 configuration, nvram, and vdisk(s).
sending incremental file list
DBHA01.xml

sent 6,401 bytes received 35 bytes 12,872.00 bytes/sec
total size is 6,294 speedup is 0.98
2024-03-14 19:39:55 information: copy of DBHA01.xml to /mnt/user/Backups/Unraid/VMs/DBHA01/20240314_1936_DBHA01.xml complete.
sending incremental file list
43c3c14f-bbb0-e3b1-af5f-fc81c78df426_VARS-pure-efi-tpm.fd

sent 540,951 bytes received 35 bytes 1,081,972.00 bytes/sec
total size is 540,672 speedup is 1.00
2024-03-14 19:39:55 information: copy of /etc/libvirt/qemu/nvram/43c3c14f-bbb0-e3b1-af5f-fc81c78df426_VARS-pure-efi-tpm.fd to /mnt/user/Backups/Unraid/VMs/DBHA01/20240314_1936_43c3c14f-bbb0-e3b1-af5f-fc81c78df426_VARS-pure-efi-tpm.fd complete.
2024-03-14 19:39:55 information: able to perform snapshot for disk /mnt/user/domains/DBHA01/vdisk1.img on DBHA01. use_snapshots is 1. vm_state is running. vdisk_type is qcow2
2024-03-14 19:39:55 information: qemu agent found. enabling quiesce on snapshot.
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': fsfreeze hook has failed with status 1

2024-03-14 19:39:55 failure: snapshot command failed on vdisk1.snap for DBHA01.
2024-03-14 19:39:57 failure: snapshot_fallback is 0. skipping backup for DBHA01 to prevent data loss. no cleanup will be performed for this vm.
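To isolate the failure from the backup plugin, the same agent command can be triggered directly through libvirt. A sketch (run on the Unraid host; "DBHA01" is this VM's name from the logs above, and the guard makes it a no-op on hosts without virsh):

```shell
# Ask the guest agent to freeze, then thaw, the guest filesystems.
# virsh domfsfreeze issues guest-fsfreeze-freeze, so a failing in-guest
# fsfreeze hook produces the same "fsfreeze hook has failed with status 1"
# error seen in the plugin log, without the plugin in the loop.
VM=DBHA01
if command -v virsh >/dev/null 2>&1; then
    virsh domfsfreeze "$VM" || echo "domfsfreeze failed for $VM"
    virsh domfsthaw "$VM" || echo "domfsthaw failed for $VM"
else
    echo "virsh not available on this host"
fi
```

If `domfsfreeze` reports the same error, the problem is inside the guest (the hook), not in how the plugin drives the snapshot.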

artifactdev commented 7 months ago

Since the new version I also receive this error on Unraid with the VM Backup plugin for my HAOS VM.

Soleima77 commented 7 months ago

After upgrading to 12.1, a full snapshot on Synology Virtual Machine Manager is not possible. It can only make a "Crash Consistent" snapshot.

Baxxy13 commented 7 months ago

No problems "snapshotting" the running HAOS VM (12.1) with Proxmox.

vzdump log:

```
INFO: starting new backup job: vzdump 105 --node PVE-N100-229 --storage ssd-512gb --compress zstd --notes-template '{{guestname}}' --mode snapshot --remove 0 --notification-mode auto
INFO: Starting Backup of VM 105 (qemu)
INFO: Backup started at 2024-03-15 12:54:08
INFO: status = running
INFO: VM Name: HAOS-Live-VM-211
INFO: include disk 'scsi0' 'local-lvm:vm-105-disk-1' 32G
INFO: include disk 'efidisk0' 'local-lvm:vm-105-disk-0' 4M
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/ssd-512gb/dump/vzdump-qemu-105-2024_03_15-12_54_08.vma.zst'
INFO: issuing guest-agent 'fs-freeze' command
INFO: issuing guest-agent 'fs-thaw' command
INFO: started backup task 'b19bd217-9adf-4823-96d5-6f9b2180543d'
INFO: resuming VM again
INFO: 2% (982.4 MiB of 32.0 GiB) in 3s, read: 327.5 MiB/s, write: 232.8 MiB/s
INFO: 6% (2.1 GiB of 32.0 GiB) in 6s, read: 373.6 MiB/s, write: 258.8 MiB/s
INFO: 12% (4.0 GiB of 32.0 GiB) in 9s, read: 673.6 MiB/s, write: 200.4 MiB/s
INFO: 16% (5.2 GiB of 32.0 GiB) in 12s, read: 390.6 MiB/s, write: 177.5 MiB/s
INFO: 17% (5.5 GiB of 32.0 GiB) in 15s, read: 113.6 MiB/s, write: 107.3 MiB/s
INFO: 20% (6.5 GiB of 32.0 GiB) in 18s, read: 323.8 MiB/s, write: 130.0 MiB/s
INFO: 21% (7.0 GiB of 32.0 GiB) in 21s, read: 177.6 MiB/s, write: 152.2 MiB/s
INFO: 22% (7.3 GiB of 32.0 GiB) in 24s, read: 122.0 MiB/s, write: 120.3 MiB/s
INFO: 28% (9.0 GiB of 32.0 GiB) in 27s, read: 564.3 MiB/s, write: 107.8 MiB/s
INFO: 29% (9.3 GiB of 32.0 GiB) in 30s, read: 122.0 MiB/s, write: 121.0 MiB/s
INFO: 31% (9.9 GiB of 32.0 GiB) in 33s, read: 200.6 MiB/s, write: 114.4 MiB/s
INFO: 34% (11.0 GiB of 32.0 GiB) in 36s, read: 368.5 MiB/s, write: 99.1 MiB/s
INFO: 37% (12.1 GiB of 32.0 GiB) in 39s, read: 379.0 MiB/s, write: 95.2 MiB/s
INFO: 40% (13.1 GiB of 32.0 GiB) in 42s, read: 318.2 MiB/s, write: 139.4 MiB/s
INFO: 42% (13.5 GiB of 32.0 GiB) in 45s, read: 155.6 MiB/s, write: 136.0 MiB/s
INFO: 43% (14.0 GiB of 32.0 GiB) in 48s, read: 179.1 MiB/s, write: 147.0 MiB/s
INFO: 45% (14.6 GiB of 32.0 GiB) in 51s, read: 182.5 MiB/s, write: 114.3 MiB/s
INFO: 46% (15.0 GiB of 32.0 GiB) in 54s, read: 147.4 MiB/s, write: 120.4 MiB/s
INFO: 48% (15.6 GiB of 32.0 GiB) in 57s, read: 193.0 MiB/s, write: 122.3 MiB/s
INFO: 50% (16.1 GiB of 32.0 GiB) in 1m, read: 173.2 MiB/s, write: 170.1 MiB/s
INFO: 54% (17.5 GiB of 32.0 GiB) in 1m 3s, read: 496.6 MiB/s, write: 126.2 MiB/s
INFO: 56% (17.9 GiB of 32.0 GiB) in 1m 6s, read: 135.3 MiB/s, write: 124.9 MiB/s
INFO: 57% (18.3 GiB of 32.0 GiB) in 1m 10s, read: 92.4 MiB/s, write: 91.8 MiB/s
INFO: 58% (18.7 GiB of 32.0 GiB) in 1m 14s, read: 97.0 MiB/s, write: 94.4 MiB/s
INFO: 59% (19.1 GiB of 32.0 GiB) in 1m 17s, read: 142.6 MiB/s, write: 104.0 MiB/s
INFO: 61% (19.6 GiB of 32.0 GiB) in 1m 20s, read: 187.3 MiB/s, write: 149.2 MiB/s
INFO: 63% (20.5 GiB of 32.0 GiB) in 1m 23s, read: 285.2 MiB/s, write: 120.2 MiB/s
INFO: 65% (20.9 GiB of 32.0 GiB) in 1m 26s, read: 149.8 MiB/s, write: 119.4 MiB/s
INFO: 66% (21.4 GiB of 32.0 GiB) in 1m 29s, read: 166.8 MiB/s, write: 163.0 MiB/s
INFO: 68% (21.9 GiB of 32.0 GiB) in 1m 32s, read: 169.2 MiB/s, write: 165.0 MiB/s
INFO: 71% (23.0 GiB of 32.0 GiB) in 1m 35s, read: 368.5 MiB/s, write: 106.2 MiB/s
INFO: 72% (23.4 GiB of 32.0 GiB) in 1m 38s, read: 130.5 MiB/s, write: 111.9 MiB/s
INFO: 74% (23.9 GiB of 32.0 GiB) in 1m 41s, read: 182.2 MiB/s, write: 110.2 MiB/s
INFO: 76% (24.6 GiB of 32.0 GiB) in 1m 44s, read: 228.0 MiB/s, write: 171.7 MiB/s
INFO: 78% (25.0 GiB of 32.0 GiB) in 1m 47s, read: 154.0 MiB/s, write: 124.6 MiB/s
INFO: 79% (25.3 GiB of 32.0 GiB) in 1m 50s, read: 110.0 MiB/s, write: 108.6 MiB/s
INFO: 80% (25.7 GiB of 32.0 GiB) in 1m 53s, read: 123.2 MiB/s, write: 123.1 MiB/s
INFO: 81% (26.2 GiB of 32.0 GiB) in 1m 56s, read: 186.1 MiB/s, write: 185.7 MiB/s
INFO: 83% (26.8 GiB of 32.0 GiB) in 1m 59s, read: 201.8 MiB/s, write: 140.0 MiB/s
INFO: 91% (29.4 GiB of 32.0 GiB) in 2m 2s, read: 880.9 MiB/s, write: 41.6 MiB/s
INFO: 100% (32.0 GiB of 32.0 GiB) in 2m 5s, read: 886.4 MiB/s, write: 42.3 MiB/s
INFO: backup is sparse: 16.03 GiB (50%) total zero data
INFO: transferred 32.00 GiB in 125 seconds (262.1 MiB/s)
INFO: archive file size: 7.67GB
INFO: adding notes to backup
INFO: Finished Backup of VM 105 (00:02:06)
INFO: Backup finished at 2024-03-15 12:56:14
INFO: Backup job finished successfully
INFO: notified via target `mail-to-root`
TASK OK
```
felge20000 commented 7 months ago

> After upgrading to 12.1, a full snapshot on Synology Virtual Machine Manager is not possible. It can only make a "Crash Consistent" snapshot.

Same here, guest tools are running. Filesystem-consistent snapshots are broken on my Synology VMM since the last HAOS update. From what I'm reading, Synology also uses the QEMU agent.

agners commented 7 months ago

@GrimD @felge20000 can you check the journal on the system after taking the snapshot? Type `login` on the VM terminal to get access to the OS shell, then run:

```
journalctl -u qemu-guest.service
```

As others have reported, on Proxmox the freeze seems to work nicely with HAOS 12.1:

```
# journalctl -f -u qemu-guest.service
Mar 15 14:35:20 homeassistant systemd[1]: Started QEMU Guest Agent.
Mar 15 14:38:31 ha-virt-proxmox qemu-ga[358]: info: guest-ping called
Mar 15 14:38:31 ha-virt-proxmox qemu-ga[358]: info: guest-fsfreeze called
Mar 15 14:38:31 ha-virt-proxmox qemu-ga[358]: info: executing fsfreeze hook with arg 'freeze'
Mar 15 14:38:31 ha-virt-proxmox qemu-ga[358]: info: executing fsfreeze hook with arg 'thaw'
```
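For context on what those log lines mean: qemu-guest-agent runs a single hook script and passes `freeze` or `thaw` as its only argument. A minimal sketch of that dispatch pattern (illustrative only — the function name and mountpoint below are placeholders, and the real HAOS hook at `/usr/libexec/haos-freeze-hook` does more than this):

```shell
# Illustrative sketch of an fsfreeze hook's dispatch logic. qemu-ga invokes
# the configured hook with "freeze" before a snapshot and "thaw" afterwards.
# fsfreeze_hook and /mnt/data are placeholders, not the real HAOS code.
fsfreeze_hook() {
    mnt="/mnt/data"                         # assumed data partition
    case "$1" in
        freeze) echo "fsfreeze -f $mnt" ;;  # a real hook would execute this
        thaw)   echo "fsfreeze -u $mnt" ;;  # ...and this on thaw
        *)      echo "usage: fsfreeze_hook freeze|thaw" >&2; return 1 ;;
    esac
}
```

If the hook exits non-zero on `freeze`, the agent surfaces that to the hypervisor, which matches the error in the original report ("fsfreeze hook has failed with status 1").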
felge20000 commented 7 months ago

@agners Thanks for providing the commands, I'd be at a loss here

So the output on my system is exactly as you posted, but the time between freezing and thawing is now 0-1 s instead of the 10-15 s it took while it was working. So the service gets called and seems to think everything is OK?

Working: freeze

After 12.1 Update, not working flashfreeze

felge20000 commented 7 months ago

From the VMM log:

```
Warning,2024/03/15 16:09:31,USER ,Took a snapshot [GMT-2024.03.15-15.09.23] from virtual machine [Home Assistant] by [USER] without filesystem consistency. Reason: [Filesystem failed to freeze because the virtual machine is busy]
```

agners commented 7 months ago

> So the service gets called and seems to think everything's ok?

Hm, yes, this looks as if all is good. You can also invoke the hook explicitly:

```
/usr/libexec/haos-freeze-hook freeze; echo $?
/usr/libexec/haos-freeze-hook thaw; echo $?
```

(Call both, otherwise the filesystem will stay frozen.)
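Building on that, if the hook succeeds but you suspect a timing difference (as reported above, where thaw follows freeze within a second), you could measure the round trip. A hypothetical helper, taking the hook path as an argument so it can be pointed at `/usr/libexec/haos-freeze-hook`:

```shell
# Hypothetical timing helper: runs "<hook> freeze" then "<hook> thaw" and
# reports the elapsed seconds. On HAOS you would pass the path
# /usr/libexec/haos-freeze-hook as the first argument.
time_freeze() {
    hook="$1"
    start=$(date +%s)
    "$hook" freeze || { echo "freeze failed (rc=$?)" >&2; return 1; }
    "$hook" thaw   || { echo "thaw failed (rc=$?)" >&2; return 1; }
    end=$(date +%s)
    echo "freeze+thaw took $((end - start))s"
}
```

Note this only times the hook itself; the hypervisor-side freeze window also includes the snapshot work between the two calls.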

agners commented 7 months ago

> Warning,2024/03/15 16:09:31,USER ,Took a snapshot [GMT-2024.03.15-15.09.23] from virtual machine [Home Assistant] by [USER] without filesystem consistency. Reason: [Filesystem failed to freeze because the virtual machine is busy]

@felge20000 hm, I don't think your case is really related to the snapshot freeze feature. This looks like a Synology/hypervisor issue to me.

felge20000 commented 7 months ago

Yepp, freezing and thawing manually also worked. So yeah, it seems like the hypervisor and the VM don't like to talk to each other any more since the update. But I guess that's really a separate issue from the OP's. I was just "relieved" to see from @Soleima77 that I'm not the only one with Synology and snapshot problems :) Sorry for taking over this topic, @GrimD

GrimD commented 7 months ago

> Yepp, freezing and thawing manually also worked. So yeah, it seems like the hypervisor and the VM don't like to talk to each other any more since the update. But I guess that's really a separate issue from the OP's. I was just "relieved" to see from @Soleima77 that I'm not the only one with Synology and snapshot problems :) Sorry for taking over this topic, @GrimD

@felge20000, Not sure, it sounds similar to me, but I'm kind of out of my depth as I'm not much of a Linux tech. All I know is that upgrading to HAOS 12.1 breaks it, while it works fine with 12.0. After initially restoring to 12.0, which fixed it, I allowed it to upgrade again to double-check that it broke (it did) and to gather all the logs etc. for this report. I then restored the VM back to before the 12.1 update again (so back to 12.0) and it's happy once more.

@agners I'll need to find some time to install 12.1 again to try what you have asked. But since I have upgraded to 12.1 twice and it broke both times, and restoring the whole VM disk to before the update resolves it, it certainly seems to be something in the update that's killing it. Will try to test ASAP.

GrimD commented 7 months ago

@agners Well, I ended up just doing it now as, thinking about it, it doesn't take long to update and then restore again. Here is the result: Screenshot 2024-03-15 172334

It seems like it issues the freeze and then that's it; I left it about 10 minutes and it still shows the same. You can see the successful freeze/thaw from my backup last night on version 12.0.

viliks commented 6 months ago

> After upgrading to 12.1, a full snapshot on Synology Virtual Machine Manager is not possible. It can only make a "Crash Consistent" snapshot.

Also had this problem. Reverted to 12.0 and snapshot creation is OK now.

genezig commented 6 months ago

Same problem here with HAOS 12.1: Debian bookworm host, QEMU/KVM virtualization. Trying to create a snapshot of the VM with `virsh snapshot-create-as --quiesce --disk-only` results in `error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': fsfreeze hook has failed with status 1` from virsh. The `journalctl -u qemu-guest.service` output on the HAOS guest console is quite similar to the one quoted above by @felge20000 (thaw only 1 s after freeze, too fast). Tried changing the virtual disk attachment from SATA to virtio - no help. According to the HAOS changelog, the last qemu-guest-agent update (to 8.0.5) happened in 11.1, so that should not be the cause, as 12.0 is not throwing this error. Maybe a problem related to kernel 6.6.20? It should not be limited to HAOS guests then - but I did not find anything related on the net.
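For anyone wanting to reproduce the libvirt case, the flow can be scripted. A dry-run sketch that only prints the virsh commands (the domain name `haos`, disk target `vda`, and snapshot name are placeholders, not from this issue):

```shell
# Dry-run sketch of a quiesced, disk-only snapshot with libvirt. The function
# prints the virsh commands rather than executing them; the domain, disk
# target, and snapshot name passed in are placeholders.
snapshot_cmds() {
    dom="$1"; disk="$2"; snap="$3"
    # --quiesce asks qemu-ga to run guest-fsfreeze-freeze first; this is the
    # step that fails with "fsfreeze hook has failed with status 1" on 12.1
    echo "virsh snapshot-create-as $dom $snap --disk-only --quiesce --atomic"
    # after copying the now-stable base image, merge the overlay back
    echo "virsh blockcommit $dom $disk --active --pivot"
}
snapshot_cmds haos vda pre-backup
```

Dropping `--quiesce` avoids the error but, like Synology's "Crash Consistent" mode, gives only a crash-consistent snapshot.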

joggs commented 6 months ago

Same here with 12.1

TheFitzZZ commented 6 months ago

Same issue with 12.1 running on Unraid KVM. Please ping me if I should run anything that might help figure this one out.

samcrang commented 6 months ago

I upgraded to 12.2 and my backups ran successfully.

Thanks for fixing this.

GrimD commented 6 months ago

All good here too on 12.2

Thank you.

felge20000 commented 6 months ago

My Synology VM is also snapshotting happily since 12.2. Thanks to everyone involved <3

VNRARA commented 2 months ago

I've got the same issue, but with 12.4 for some reason, with HA as a Proxmox VM.