flostru opened 2 years ago
Is `vsphereveleroplugin/backup-driver:v1.4.0` the same image as `vsphereveleroplugin/velero-plugin-for-vsphere:v1.4.0`?
Actually, while the sidecar initializes, it creates a deployment called backup-driver with the image `vsphereveleroplugin/backup-driver:v1.4.0`, which fails with the error log above because it needs a Velero BackupStorageLocation that doesn't exist yet, since the initial Velero deployment hasn't finished. A classic bootstrap problem.
And no, `vsphereveleroplugin/backup-driver:v1.4.0` is a different image from `vsphereveleroplugin/velero-plugin-for-vsphere:v1.4.0`.
According to the documentation for velero-plugin-for-vsphere (https://github.com/vmware-tanzu/velero-plugin-for-vsphere/blob/main/docs/vanilla.md), `vsphereveleroplugin/velero-plugin-for-vsphere:v1.3.0` is the correct image. I use the later v1.4.0, but v1.3.0 shows the same behaviour.
@flostru The helm chart can't guarantee the ordering, and velero-plugin-for-vsphere requires that the BSL is available. So I think this is a velero-plugin-for-vsphere design issue.
Is there any way the velero-plugin-for-vsphere could skip checking if the BSL is available?
@flostru @jenting
Did you reach a resolution to this issue?
I am seeing the same issue and the same errors.
The init container spins up with the image `velero-plugin-for-vsphere:v1.4.2`, then creates a new backup-driver deployment with the image `backup-driver:v1.4.2`, and also creates a new daemonset.apps/datamgr-for-vsphere-plugin with the image `data-manager-for-plugin:v1.4.2`.
Both the backup-driver deployment and the daemonset.apps/datamgr-for-vsphere-plugin fail with the error messages noted above.
@JuanGarcia01 Actually, we implemented a workaround: we remove the init container and trigger a Kubernetes Job that installs the plugin after the helm installation is done.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: install-velero-plugin-for-vsphere
  namespace: velero
spec:
  backoffLimit: 4
  template:
    spec:
      automountServiceAccountToken: true
      containers:
        - name: install-velero-plugin-for-vsphere
          image: velero/velero:v1.10.1
          imagePullPolicy: IfNotPresent
          command:
            - /velero
          args:
            - '-n'
            - velero
            - plugin
            - add
            - vsphereveleroplugin/velero-plugin-for-vsphere:v1.4.2
      restartPolicy: Never
      serviceAccountName: velero
```
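After applying the Job, you can confirm that the plugin registered correctly with the Velero CLI. A minimal sketch, assuming the default `velero` namespace and the Job name from the manifest above:

```shell
# Wait for the one-shot installer Job to finish (times out after 2 minutes)
kubectl -n velero wait --for=condition=complete --timeout=120s \
  job/install-velero-plugin-for-vsphere

# List the registered plugins; velero-plugin-for-vsphere should now appear
velero plugin get -n velero
```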
This is easy since we use terraform. Here is the part of our tf module that does this:
```hcl
resource "kubernetes_job" "velero_plugin_for_vsphere" {
  metadata {
    name      = "install-velero-plugin-for-vsphere"
    namespace = var.namespace
  }
  spec {
    template {
      metadata {}
      spec {
        container {
          name    = "install-velero-plugin-for-vsphere"
          image   = "velero/velero:${var.velero_cli_version}"
          command = ["/velero"]
          args = [
            "-n",
            "${var.namespace}",
            "plugin",
            "add",
            "vsphereveleroplugin/velero-plugin-for-vsphere:${var.velero_plugin_for_vsphere_version}",
          ]
        }
        restart_policy       = "Never"
        service_account_name = "velero"
      }
    }
    backoff_limit = 4
  }
}
```
What steps did you take and what happened:
Since the plugin installation (vsphere) happens in an init container, the first install of the chart fails because no BackupStorageLocation has been created yet.
Reproduce:
helm install with the aws and vsphere plugins.
values.yaml:
What did you expect to happen:
Successful initialization.
The output of the following commands will help us better understand what's going on:
Output of the init container vsphereveleroplugin/velero-plugin-for-vsphere:v1.4.0
Output of
kubectl logs deploy/backup-driver -n velero
Anything else you would like to add:
If I add the vsphere init container later, after installing the helm chart once with only the aws plugin, it works fine: in a later step (after the init containers run) the default BackupStorageLocation gets created, and then I can add the vsphere plugin.
So it looks like we actually need a two-step installation of the helm chart, which is awkward for automated bootstrapping.
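The two-step installation described above can be sketched as follows. This is a hedged sketch, not the author's exact commands: the chart name `vmware-tanzu/velero` is the official Velero chart, but the values file names are hypothetical placeholders for a configuration with only the aws plugin and one that adds the vsphere init container:

```shell
# Step 1: install the chart with only the aws plugin, so the default
# BackupStorageLocation gets created successfully
helm install velero vmware-tanzu/velero -n velero --create-namespace \
  -f values-aws-only.yaml   # hypothetical values file: aws plugin only

# Step 2: once the BSL exists, upgrade the release with the vsphere
# init container added back in
helm upgrade velero vmware-tanzu/velero -n velero \
  -f values-with-vsphere.yaml   # hypothetical values file: aws + vsphere
```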
Environment:
/etc/os-release: debian bullseye