Corsinvest / cv4pve-autosnap

Automatic snapshot tool for Proxmox VE
https://www.corsinvest.it/cv4pve
GNU General Public License v3.0

Process all VMs from a pool #52

Closed by michabbs 2 years ago

michabbs commented 3 years ago

It would be good to process all VMs from a specified pool. For example something like this:

cv4pve-autosnap --vmid="@Poolname"

franklupo commented 3 years ago

Hi, I was thinking of letting the vmid parameter accept a pool name via a 'pool-' prefix. What do you think?

Best regards

michabbs commented 3 years ago

Well, it is actually possible to have a VM named "pool-1", so such a prefix does not seem like a good idea. "@" as a prefix looks much nicer and is not ambiguous. :-)

franklupo commented 3 years ago

Hi, actually, if you read the documentation, prefixes are already used.

 --vmid              The id or name VM/CT comma separated (eg. 100,101,102,TestDebian)
                     -vmid or -name exclude (e.g. -200,-TestUbuntu)
                     'all-???' for all VM/CT in specific host (e.g. all-pve1, all-$(hostname)),
                     'all' for all VM/CT in cluster
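
For example, with the existing syntax a run against all VM/CT on one host looks like this (host, username, and password are placeholder values):

    cv4pve-autosnap --host=pve1.local --username=root@pam --password=*** --vmid=all-pve1 snap --label=daily --keep=3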
franklupo commented 3 years ago

For compatibility, the old format is kept and a new format is introduced:

'@pool-???' for all VM/CT in specific pool (e.g. @pool-customer1),
'@all-???' for all VM/CT in specific host (e.g. @all-pve1, @all-$(hostname)),
'@all' for all VM/CT in cluster
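
With that, a per-host run under the new format would look like this (connection values are placeholders):

    cv4pve-autosnap --host=pve1.local --username=root@pam --password=*** --vmid=@all-$(hostname) snap --label=daily --keep=3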
michabbs commented 3 years ago

Looks nice. :-) ...and what about "all VMs in a given pool on a specific node"? :-) :-) :-)

franklupo commented 3 years ago

The pool is unique within the cluster. It is not necessary to specify the host.

michabbs commented 3 years ago

...but a node is not unique to a pool. You might want to run the backup on one node only (all VMs of a particular pool that reside on that node).

Actually this is reasonable. If you run cv4pve-autosnap from cron on one node, everything goes fine until that node fails. Then your snapshots are no longer created, even though the other nodes are still up. It is a better idea to run cv4pve-autosnap separately on every node, so that every node takes care of its own VMs only. So you need to "snapshot all VMs in a given pool, on a specific node only".

franklupo commented 3 years ago

It is not necessary to install cv4pve-autosnap inside a node; it can run externally, because it uses the API. The VMs in a cluster are unique, as the pools are, so even if a node dies everything keeps working. To use all the nodes of the cluster, specify them in the --host parameter in the form "host[:port],host1[:port],host2[:port]".

Installation outside the cluster is preferred.
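
For example, running from a machine outside the cluster and listing every node in --host (hostnames and ports are placeholders):

    cv4pve-autosnap --host=pve1:8006,pve2:8006,pve3:8006 --username=root@pam --password=*** --vmid=all snap --label=daily --keep=3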

michabbs commented 3 years ago

Good point. :-) But... it is still possible to install and use cv4pve-autosnap directly on a node, and I am sure people do it. And when they do... they might want to separate processing "by node".

franklupo commented 3 years ago

When you execute cv4pve-autosnap, it does not matter which node you run it on, because it will snapshot the VMs specified in the --vmid parameter, wherever they are in the cluster (even if the cluster has only one node). What you want is perhaps something different. Give me some examples.

Best regards

michabbs commented 3 years ago

Execution is successful as long as it is executed at all.

Imagine: there are 2 nodes (node1, node2). cv4pve-autosnap is installed on node1 and automatically creates snapshots of all VMs in a pool. The snapshots are created on all nodes. Everything works.

Now: node1 goes down. Node2 still works, but snapshots are no longer created.

Solution: Install cv4pve-autosnap on all nodes, each of them creating snapshots on its own node only. This way snapshots on node2 are not affected by a failure of node1.
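
For example, each node could carry a cron entry like this (a sketch: it uses the existing all-$(hostname) selector, since a combined pool+node selector does not exist; connection values, label, and retention are placeholders):

    # on every node: snapshot only the VM/CT currently running on this node
    0 * * * * root cv4pve-autosnap --host=localhost --username=root@pam --password=*** --vmid=all-$(hostname) snap --label=hourly --keep=24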

franklupo commented 3 years ago

If you install cv4pve-autosnap outside the cluster, the problem does not exist. However, what happens if, with HA, the VMs are moved from one node to another? You would no longer snapshot them.

michabbs commented 3 years ago

> If you install cv4pve-autosnap outside the cluster, the problem does not exist.

Yes, but that requires "the outside" not to fail. So we come back to the initial problem: one failure stops snapshots in the whole cluster.

> However, what happens if, with HA, the VMs are moved from one node to another? You would no longer snapshot them.

Why? After migration the VM stays in the same pool, so there is no problem. Snapshots will automatically be made on the new node. (If each node snapshots "all my own VMs in the pool", then a newly migrated VM will be snapshotted as well. That is the crux of the idea.)

franklupo commented 2 years ago

In the latest version you can specify the pool using '@pool-???' for all VM/CT in a specific pool (e.g. @pool-customer1).
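
For example (connection values are placeholders):

    cv4pve-autosnap --host=pve1.local --username=root@pam --password=*** --vmid=@pool-customer1 snap --label=daily --keep=3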

Best regards