One clarification: the same error occurs with the wyng 0.4alpha3 release 20230524 (and identical line numbers!).
Sorry for not posting in the wyng 0.4alpha3 thread; I discovered that one afterwards.
@eric-blabla If you run with --debug
then it should leave behind a /tmp/wyng-debug folder, which will contain an err.log file.
Indeed, and it is worrying! Is something wrong with the thin LVM? Here is what err.log contains:
--+--
--+--
--+--
--+--
--+--
WARNING: Sum of all thin volume sizes (<1.56 TiB) exceeds the size of thin pools and the size of whole volume group (469.45 GiB).
--+--
WARNING: Sum of all thin volume sizes (<1.56 TiB) exceeds the size of thin pools and the size of whole volume group (469.45 GiB).
--+--
WARNING: Sum of all thin volume sizes (<1.56 TiB) exceeds the size of thin pools and the size of whole volume group (469.45 GiB).
--+--
WARNING: Sum of all thin volume sizes (<1.56 TiB) exceeds the size of thin pools and the size of whole volume group (469.45 GiB).
--+--
WARNING: Sum of all thin volume sizes (<1.56 TiB) exceeds the size of thin pools and the size of whole volume group (469.45 GiB).
--+--
WARNING: Sum of all thin volume sizes (<1.56 TiB) exceeds the size of thin pools and the size of whole volume group (469.45 GiB).
--+--
--+--
--+--
no current metadata snap
Usage: thin_delta [options] <device or file>
Options:
{--thin1, --snap1}
{--thin2, --snap2}
{-m, --metadata-snap} [block#]
{--verbose}
{-h|--help}
{-V|--version}
--+--
--+--
The 'WARNING' messages are actually normal for LVM. They just mean that if you tried to fill up all the space in all your VMs, LVM wouldn't have enough space for them all. So you can ignore them.
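If you want to double-check, the actual fill level of the pools can be seen with standard LVM tooling (a generic check, not something Wyng requires), for example:
lvs -o lv_name,pool_lv,data_percent,metadata_percent qubes_dom0
Over-provisioning only becomes a real problem when Data% gets close to 100.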
But 'no current metadata snap' is the real problem. It's as if the dmsetup
command isn't getting executed. Are you running Wyng in dom0?
Yes, in dom0 and as root. I tested
dmsetup message /dev/mapper/qubes_dom0-root--pool-tpool 0 reserve_metadata_snap
dmsetup message /dev/mapper/qubes_dom0-root--pool-tpool 0 release_metadata_snap
and that works. The same goes for "vm-pool".
I did add "root" to the list of VMs to update, simply to have a copy of my dom0 root. But that one lives in a different pool (root-pool vs. vm-pool). Is that what causes the problem?
@eric-blabla The root volume may be triggering the problem, but it's not supposed to. Wyng allows you to add volumes from different pools, as long as those pools are in the same vg. The pool name you see in the settings serves as the default for receiving a volume, so it doesn't matter here.
Thanks for trying those dmsetup commands. Could you try to do the same for '/dev/mapper/qubes_dom0-vm--pool-tpool'?
I'll do some testing with multiple pools to see if I can reproduce the error.
dmsetup message /dev/mapper/qubes_dom0-vm--pool-tpool 0 reserve_metadata_snap
dmsetup message /dev/mapper/qubes_dom0-vm--pool-tpool 0 release_metadata_snap
Both work fine as well. Might it be useful to launch the dmsetup commands with a verbose flag to produce more output?
I tried to add some "-v -v" flags to this command in the code:
do_exec([[CP.dmsetup,"message", vgname+"-"+poolname+"-tpool",
"0", action+"_metadata_snap"]], check= action=="reserve")
but I seem to be making errors; the command does not launch as it should.
@eric-blabla I reproduced the error on my system; it appears to be caused by a stale metadata snapshot that was never released. Today's update should fix the problem.
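If anyone hits this before updating, a possible manual stop-gap (my suggestion only, not what the update does) is to check whether the pool is still holding a metadata snapshot and, if so, release it. In the thin-pool status line, the held-metadata-root field (right after the data usage pair) should show '-' when nothing is held:
dmsetup status qubes_dom0-vm--pool-tpool
dmsetup message /dev/mapper/qubes_dom0-vm--pool-tpool 0 release_metadata_snap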
It did! Thank you. Small remark: "root" seems to be sent each time, even without changes made. Maybe that is because wyng itself lives there and changes some small files? :)
root is <3M, so it's not an issue, just a remark.
@eric-blabla The dom0 root is special because Qubes doesn't take snapshots of it. This means you are backing it up "hot", and the tiny changes that occur minute by minute are always noticed by Wyng. If it's a real issue for you, there are ways around it, such as making your own manual snapshots of root and renaming "root" in the archive to the name you chose for your root snapshot. Or you could specify --volex root
to exclude root most of the time.
I am playing with wyng 0.4alpha3. I successfully took a snapshot of all VMs inside Qubes.
The "verify" operation suggests all is fine:
However, updating the snapshot fails: