openshift-kni / lifecycle-agent

Local agent for orchestration of SNO Image Based Upgrade
Apache License 2.0

Bump github.com/vmware-tanzu/velero from 1.13.2 to 1.14.0 #578

Closed · dependabot[bot] closed this 1 week ago

dependabot[bot] commented 2 weeks ago

Bumps github.com/vmware-tanzu/velero from 1.13.2 to 1.14.0.

Release notes

Sourced from github.com/vmware-tanzu/velero's releases.

v1.14.0

v1.14

Download

https://github.com/vmware-tanzu/velero/releases/tag/v1.14.0

Container Image

velero/velero:v1.14.0

Documentation

https://velero.io/docs/v1.14/

Upgrading

https://velero.io/docs/v1.14/upgrade-to-1.14/

Highlights

The maintenance work for kopia/restic backup repositories is run in jobs

Since Velero started using kopia for filesystem-level backup/restore, we've noticed that when Velero connects to the kopia backup repositories and performs maintenance, it sometimes consumes excessive memory, which can cause the Velero pod to be OOM-killed. To mitigate this, the maintenance work is moved out of the Velero pod into a separate Kubernetes job, and the user can specify the resource requests for that job in `velero install`.
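
As a rough illustration of what per-job resource requests look like, here is a minimal sketch using plain Kubernetes Go API types; the job name, container name, image tag, and resource values below are assumptions, not Velero's actual install flags or job template.

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical repository-maintenance Job: the point is only that the
	// maintenance work runs in its own pod with its own resource requests,
	// so an out-of-memory event no longer takes down the main Velero pod.
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "repo-maintenance-example", Namespace: "velero"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "maintenance", // hypothetical container name
						Image: "velero/velero:v1.14.0",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("500m"),
								corev1.ResourceMemory: resource.MustParse("512Mi"),
							},
						},
					}},
				},
			},
		},
	}
	fmt.Println(job.Spec.Template.Spec.Containers[0].Resources.Requests.Memory())
}
```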

Volume Policies are extended to support more actions to handle volumes

In an earlier release, a flexible volume policy was introduced to skip certain volumes from a backup. In v1.14 we've enhanced this policy so the user can control how volumes are backed up: the "action" in a policy can now be set to "fs-backup" or "snapshot", and Velero backs up matching volumes accordingly. This enables fine-grained, opt-in/opt-out style control without having to modify the target workload. For more details please refer to https://velero.io/docs/v1.14/resource-filtering/#supported-volumepolicy-actions
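
For illustration, a minimal sketch of such a policy stored in a ConfigMap, built with the Kubernetes Go client types. The policy schema follows the linked resource-filtering documentation; the ConfigMap name, data key, and storage-class value are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The policy says: volumes on the (hypothetical) "standard" storage class are
// backed up with fs-backup; everything else falls through to other settings.
const volumePolicy = `version: v1
volumePolicies:
- conditions:
    storageClass:
    - standard
  action:
    type: fs-backup
`

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backup-volume-policy", // hypothetical name
			Namespace: "velero",
		},
		Data: map[string]string{"policy.yaml": volumePolicy}, // key name is an assumption
	}
	fmt.Println(cm.Data["policy.yaml"])
}
```

A backup would then reference this ConfigMap as its resource policy, as described in the linked documentation.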

Node Selection for Data Movement Backup

In Velero, the data movement flow relies on datamover pods, which may consume substantial resources and run for a long time. In v1.14, the user can create a configmap defining the nodes on which datamover pods are eligible to be launched. For more details refer to https://velero.io/docs/v1.14/data-movement-backup-node-selection/
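
As a sketch of the idea only: the eligible nodes are expressed as a node selector that the node agent reads from a ConfigMap. The ConfigMap name, data key, JSON field names, and node label below are all assumptions; the linked document is authoritative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical load-affinity document: only run datamover pods on nodes
	// carrying this (made-up) label.
	affinity := map[string]interface{}{
		"loadAffinity": []map[string]interface{}{{
			"nodeSelector": metav1.LabelSelector{
				MatchLabels: map[string]string{"node-role.example.com/datamover": "true"},
			},
		}},
	}
	body, err := json.MarshalIndent(affinity, "", "  ")
	if err != nil {
		panic(err)
	}

	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "node-agent-config", Namespace: "velero"}, // name is an assumption
		Data:       map[string]string{"node-agent-config.json": string(body)},         // key is an assumption
	}
	fmt.Println(cm.Data["node-agent-config.json"])
}
```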

VolumeInfo metadata for restored volumes

In v1.13, we introduced VolumeInfo metadata for backups to help the Velero CLI and downstream adopters understand how Velero handles each volume during a backup. In v1.14, similar metadata is persisted for each restore, and the Velero CLI has been updated to surface more information in the output of `velero restore describe`.

"Finalizing" phase is introduced to restores

The "Finalizing" phase is added to the state transition flow to restore, which helps us fix several issues: The labels added to PVs will be restored after the data in the PV is restored via volumesnapshotter. The post restore hook will be executed after datamovement is finished.

Certificate-based authentication support for Azure

In addition to service-principal authentication with a secret (password), Velero v1.14.0 introduces support for service-principal authentication with a certificate. This enables phishing-resistant authentication through conditional access policies, which better protects Azure resources and is the approach recommended by Azure.
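
For context, certificate-based service-principal authentication in the Azure SDK for Go (azidentity) looks roughly like the sketch below. This only illustrates the underlying mechanism, not how Velero wires it up internally; the tenant/client IDs and the certificate path are placeholders.

```go
package main

import (
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

func main() {
	// Placeholder values; in practice these come from the credentials
	// configuration supplied to the Azure plugin.
	tenantID := "00000000-0000-0000-0000-000000000000"
	clientID := "11111111-1111-1111-1111-111111111111"

	pemData, err := os.ReadFile("/credentials/sp-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	certs, key, err := azidentity.ParseCertificates(pemData, nil)
	if err != nil {
		panic(err)
	}

	// Certificate-based (rather than secret-based) service principal credential.
	cred, err := azidentity.NewClientCertificateCredential(tenantID, clientID, certs, key, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("credential ready: %T\n", cred)
}
```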

Runtime and dependencies

  • Golang runtime: v1.22.2
  • kopia: v0.17.0

Limitations/Known issues

  • For external BackupItemAction plugins that take snapshots of PVs, such as the vSphere plugin: if the plugin checks the "snapshotVolumes" field in the backup spec as its criterion for snapshotting, the settings in the volume policy will not take effect. For example, if "snapshotVolumes" is set to false in the backup spec but a volume matches a volume-policy condition with the "snapshot" action, the plugin will not snapshot the volume, because it never consults the volume policy (see the sketch below). For more details please refer to #7818
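
To make the interaction concrete, here is a minimal sketch using the Velero API types from the Go module bumped by this PR (the backup name and field values are assumptions): with `snapshotVolumes` set to false on the backup, a plugin that only checks this field will skip the volume even if a volume policy marks it for the "snapshot" action.

```go
package main

import (
	"fmt"

	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	snapshotVolumes := false

	backup := velerov1.Backup{
		ObjectMeta: metav1.ObjectMeta{Name: "example-backup", Namespace: "velero"},
		Spec: velerov1.BackupSpec{
			// A plugin that looks only at this field will not snapshot any
			// volume, regardless of what the volume policy says.
			SnapshotVolumes: &snapshotVolumes,
		},
	}
	fmt.Println(*backup.Spec.SnapshotVolumes)
}
```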

Breaking changes

  • The CSI plugin has been merged into the Velero repo in the v1.14 release. It is installed by default as an internal plugin and should not be installed via the `--plugins` parameter of `velero install`.
  • The default resource requests and limits for the node agent are removed in v1.14 so that node-agent pods get the "BestEffort" QoS class. For more details refer to #7391
  • Namespace filtering behavior changed during backup: in v1.14, when the includedNamespaces/excludedNamespaces fields are not set but labelSelector/orLabelSelectors are set in the backup spec, the backup only includes the namespaces that contain resources matching the label selectors, whereas previous releases included all namespaces with such settings (see the sketch after this list). For more details refer to #7105
  • Patching a PV during the "Finalizing" phase may leave the restore in the "PartiallyFailed" state when the PV is stuck in the "Pending" state, whereas in previous releases the restore could end up "Complete". For more details refer to #7866
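
A minimal sketch of the backup spec the namespace-filtering note above refers to, again using the API types from this module (the label key/value are made up): with no includedNamespaces/excludedNamespaces and only a labelSelector set, v1.14 backs up only the namespaces that actually contain matching resources.

```go
package main

import (
	"fmt"

	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	backup := velerov1.Backup{
		ObjectMeta: metav1.ObjectMeta{Name: "labelled-only", Namespace: "velero"},
		Spec: velerov1.BackupSpec{
			// IncludedNamespaces / ExcludedNamespaces are left unset on purpose.
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app.example.com/backup": "true"}, // made-up label
			},
		},
	}
	// v1.14: only namespaces containing resources that match the selector are
	// included; earlier releases included every namespace with this spec.
	fmt.Println(backup.Spec.LabelSelector.MatchLabels)
}
```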

All Changes

... (truncated)

Commits
  • 2fc6300 Merge pull request #7860 from blackpiglet/update_e2e_for_1_14
  • 200f16e Merge branch 'release-1.14' into update_e2e_for_1_14
  • 0d36572 Merge pull request #7876 from reasonerjt/update-release-note-1.14
  • 08fea6e Merge branch 'release-1.14' into update-release-note-1.14
  • d20bd16 Skip parallel files upload and download test for Restic case.
  • bf778c7 Merge pull request #7875 from reasonerjt/fix-restore-crash-1.14
  • a650059 Update release note of 1.14
  • f61c8b9 Add checks for csisnapshot for vol_info population
  • 2136679 Merge pull request #7852 from reasonerjt/fix-7849-1.14
  • f6367ca Use PVC to track the CSI snapshot in restore
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
openshift-ci[bot] commented 2 weeks ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has not yet been approved by any approvers. Once this PR has been reviewed and has the lgtm label, please assign donpenney for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:
- **[OWNERS](https://github.com/openshift-kni/lifecycle-agent/blob/main/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
openshift-ci[bot] commented 2 weeks ago

Hi @dependabot[bot]. Thanks for your PR.

I'm waiting for an openshift-kni member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
dependabot[bot] commented 1 week ago

OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting @dependabot ignore this major version or @dependabot ignore this minor version. You can also ignore all major, minor, or patch releases for a dependency by adding an ignore condition with the desired update_types to your config file.

If you change your mind, just re-open this PR and I'll resolve any conflicts on it.