Open simondeziel opened 6 months ago
For the record, I worked around this issue with this hack:
```sh
for ctn in apt ... log mail metrics pm rproxy smb squid weechat; do
    # roll back to the most recent snapshot
    last_snap="$(lxc info "${ctn}" | grep -owE 'snap[0-9]+' | tail -n1)"
    lxc restore "${ctn}" "${last_snap}"

    # reuse the Hostid recorded in volatile.last_state.idmap so the container
    # keeps the idmap its files are already shifted to
    last_state_idmap="$(lxc config get "${ctn}" volatile.last_state.idmap)"
    last_idmap_base="$(echo "${last_state_idmap}" | sed 's/.*"Hostid":\([0-9]\+\),.*/\1/')"
    lxc config set "${ctn}" volatile.idmap.base "${last_idmap_base}"
    lxc config set "${ctn}" volatile.idmap.next "${last_state_idmap}"

    lxc start "${ctn}"
done
```
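The intent of the hack is to feed the `Hostid` recorded in `volatile.last_state.idmap` (i.e. the map the on-disk files are currently shifted to) back into `volatile.idmap.base` and `volatile.idmap.next`, so LXD has no reason to remap anything on the next start. A quick sanity check along these lines confirms it took effect (a sketch, assuming the remap decision comes down to `volatile.idmap.next` differing from `volatile.last_state.idmap`, as described below):

```sh
for ctn in apt log mail; do   # any of the containers from the loop above
    next="$(lxc config get "${ctn}" volatile.idmap.next)"
    last="$(lxc config get "${ctn}" volatile.last_state.idmap)"
    if [ "${next}" = "${last}" ]; then
        echo "${ctn}: idmaps match, no remap expected on start"
    else
        echo "${ctn}: idmap mismatch, a full remap will happen on start"
    fi
done
```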
I just went through a `lxd recover` after an accidental `apt-get autopurge -y snapd` (don't ask, or maybe ask around a beer). This nuked LXD's DB but (fortunately) left the zpool intact for `lxd recover` to recover all the data.

On this server, there was this `ganymede` container configured with `security.idmap.isolated=true` and presumably `volatile.idmap.base=1065536`. This container has many volumes attached to it, so the idmap details need to be right; otherwise those volumes won't be remapped and will be inaccessible.

After a successful `lxd recover`, here's what the config of the last snapshot for that container looks like:
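The relevant keys can be pulled out of the snapshot config with something like this (a sketch; `snap0` is a placeholder for whatever `lxc info` reports as the last snapshot):

```sh
lxc config show ganymede/snap0 | grep -E 'volatile\.(idmap|last_state)'
```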
In there, we see that both `volatile.idmap.current` and `volatile.last_state.idmap` use a `hostid` of `1065536`. We also see that `volatile.idmap.next` uses a different `hostid` of `1131072`, presumably due to `volatile.idmap.base` being set to this value. This will cause the container to go through an ID remapping, which could have been avoided had `volatile.idmap.next` been set identically to `volatile.idmap.current`.
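For completeness, here's roughly how the three keys can be compared (a sketch reusing the same crude `sed` extraction as in the workaround above; after recovery, `volatile.idmap.current` and `volatile.last_state.idmap` report `1065536` while `volatile.idmap.next` reports `1131072`):

```sh
for key in volatile.idmap.current volatile.idmap.next volatile.last_state.idmap; do
    hostid="$(lxc config get ganymede "${key}" | sed 's/.*"Hostid":\([0-9]\+\),.*/\1/')"
    echo "${key}: Hostid=${hostid}"
done
```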
In other words, why is `volatile.idmap.base` changed during recovery?

Additional information: