Open mkretzer opened 6 years ago
Hi Markus,
This is a known limitation currently, see here https://github.com/vmware/kubernetes/issues/302. There are several approaches (workarounds) to tackle that, both currently and in the future. But as of today, traditional backup solutions won't work in Kubernetes environments.
cc @tusharnt and storage engineering here.
Please also feel free to reach out to SIG VMware on Slack (https://github.com/kubernetes/community/tree/master/sig-vmware) to further discuss this.
Can someone explain why a non-independent disk is not even an option? We have an application which does not host that much data and where pods are not rescheduled/created often. Even if our backup solution had to re-read all data with every backup, that would be fine for us.
@mkretzer Due to the way independent disks are currently handled in vSphere, no major backup provider supports taking backups of independent disks, mainly due to missing snapshot support IIRC.
Currently you have to work around that until vSphere First Class Disks (FCD), introduced in v6.5,
are fully supported by all backup providers. Workarounds could be:
[1] https://storagehub.vmware.com/t/site-recovery-manager-3/vsphere-replication-faq/
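To see which pods/volumes are affected, you can check which of a VM's disks are attached in independent mode, since those are exactly the ones VM snapshots (and thus snapshot-based backup tools) skip. A minimal sketch — the vCenter host/credentials, VM name, and the use of pyVmomi are assumptions; the disk-mode strings are the standard vSphere values:

```python
# Disk modes that vSphere snapshots skip (and that snapshot-based
# backup products therefore cannot capture):
INDEPENDENT_MODES = {"independent_persistent", "independent_nonpersistent"}

def unsnapshottable_disks(disks):
    """Given (label, disk_mode) pairs for a VM's virtual disks,
    return the labels of disks that snapshots will skip."""
    return [label for label, mode in disks if mode in INDEPENDENT_MODES]

# Usage against a live vCenter (untested sketch, assumes pyVmomi and
# placeholder hostnames/credentials):
# from pyVim.connect import SmartConnectNoSSL
# from pyVmomi import vim
# si = SmartConnectNoSSL(host="vcenter.example.com", user="...", pwd="...")
# vm = si.content.searchIndex.FindByDnsName(dnsName="k8s-node-1", vmSearch=True)
# disks = [(d.deviceInfo.label, d.backing.diskMode)
#          for d in vm.config.hardware.device
#          if isinstance(d, vim.vm.device.VirtualDisk)]
# print(unsnapshottable_disks(disks))
```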
That was not the question. Why is it not possible to attach the disk as non-independent? That would be the best workaround. Sure, it would have downsides as well (for example, CBT will not work after re-attachment), but it would be worth it!
Hello,
we need non-independent disks mapped with vSphere Storage for Kubernetes. Right now, when we map a volume claim to a pod, an independent disk gets created.
This means we cannot back up this disk with Veeam.
Since we do not create pods on a daily basis, our pods remain quite static for a long time, and we can afford to lose incremental backups when re-scheduling happens.
How can we implement this? VMware support is trying to help us right now, but currently they have no one who is qualified for this product (Ticket 18822315306).
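For reference, the vSphere-backed volumes in question can be enumerated from the Kubernetes side: the in-tree vSphere volume plugin records the VMDK path in each PersistentVolume's spec. A minimal sketch, assuming the official `kubernetes` Python client (piping `kubectl get pv -o json` through a similar filter works too):

```python
def vsphere_volume_paths(pv_items):
    """Given PersistentVolume objects as dicts (kubectl get pv -o json
    style), return the [datastore] VMDK path of each vSphere-backed
    volume, skipping PVs backed by other volume plugins."""
    paths = []
    for pv in pv_items:
        vol = pv.get("spec", {}).get("vsphereVolume")
        if vol:
            paths.append(vol["volumePath"])
    return paths

# Usage against a live cluster (untested sketch, assumes the
# `kubernetes` Python client and a configured kubeconfig):
# from kubernetes import client, config
# config.load_kube_config()
# pvs = client.CoreV1Api().list_persistent_volume()
# print([pv.spec.vsphere_volume.volume_path
#        for pv in pvs.items if pv.spec.vsphere_volume])
```

With those VMDK paths in hand, the corresponding disks (and their independent attachment mode) can be located on the node VMs.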
Markus