aivanise opened 1 year ago
Please show "lxc config show (instance) --expanded" before and after restore.
Reproduced the issue.
This will be because, when idmapped mounts from the kernel are not in use, the volume's idmap settings are stored only in the storage volume's DB record, e.g.
lxc storage volume show default foo
config:
volatile.idmap.last: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
description: ""
name: foo
type: custom
And when the DB record is lost, it cannot be recovered by lxd recover because there is no equivalent of the instance's backup.yaml for custom volumes. Thus when the volume is next used it gets shifted again.
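As a stop-gap, a user can keep their own copy of the volume's config and try to re-apply it after recovery. A minimal sketch, assuming the pool/volume names used elsewhere in this issue, and assuming LXD will accept the volatile.idmap.* keys back through lxc storage volume edit (it may refuse volatile keys, in which case the data would need to be re-shifted manually):

# before deleting/losing the volume: save its config, including volatile.idmap.*
lxc storage volume show default testvol > default_testvol.yaml

# after `lxd recover` has re-created the (empty) DB record: re-apply the saved config
lxc storage volume edit default testvol < default_testvol.yaml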
What do you think @stgraber, is this a "won't fix" or do we need to add a backup of the volume's config somewhere on disk, aside from the database?
It'd be good to have something for sure, but I'm not sure what can be done given that unlike instances, we don't really have a place for a backup.yaml type file on custom volumes...
What may push me towards "won't fix" is that VFS idmap is starting to be pretty ubiquitous, so the need for id remapping of instances and volumes is going to go down very rapidly over the next 6 months or so.
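For anyone following along, a quick way to check whether the host is already on the idmapped-mounts path (a sketch, assuming a LXD version that reports kernel features in lxc info):

# look for idmapped_mounts: "true" under environment.kernel_features
lxc info | grep -i idmapped_mounts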
Idmapped mounts for the win!
Required information
config:
  cluster.https_address: lxd11:8443
  core.https_address: lxd11:8443
api_extensions:
Issue description
Attempted to lxd recover one container and an attached custom storage volume. The recovery itself went OK, but the uids/gids on the storage volume were wrong:
[root@vaulttest ~]# ls -al /testvol/
total 10
drwx--x--x  2 1000000 1000000  3 May 22 17:47 .
drwxr-xr-x 19 root    root    24 May 22 16:33 ..
-rw-r--r--  1 1000000 1000000 10 May 22 16:34 aa
Steps to reproduce
lxc launch images:ubuntu vaulttest
lxc storage volume create default testvol
lxc storage volume attach default testvol vaulttest /testvol
Copy the container and custom volume datasets aside:
zfs send mypool/containers/vaulttest | zfs receive mypool/temp/vaulttest
zfs send mypool/custom/default_testvol | zfs receive mypool/temp/default_testvol
lxc rm -f vaulttest
lxc storage volume delete default testvol
zfs rename mypool/temp/vaulttest mypool/containers/vaulttest
zfs rename mypool/temp/default_testvol mypool/custom/default_testvol
lxd recover
lxc exec vaulttest -- bash
[root@vaulttest ~]# ls -al /testvol/
You will see the output above, i.e. the files have a uid of 1000000 instead of 0.
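For comparison, a way to see why only the custom volume loses its ownership. This is a sketch, assuming a snap-installed LXD and that the datasets are mounted (e.g. the container is running); paths differ for non-snap installs:

# the container carries its idmap state in backup.yaml on the volume itself,
# which is what `lxd recover` reads back:
grep idmap /var/snap/lxd/common/lxd/storage-pools/default/containers/vaulttest/backup.yaml

# the custom volume has no equivalent file, so its volatile.idmap.* keys
# only ever existed in the (now deleted) database record:
ls /var/snap/lxd/common/lxd/storage-pools/default/custom/default_testvol/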