kubevirt / hyperconverged-cluster-operator

Operator pattern for managing multi-operator products
Apache License 2.0

[release-1.10] Backport CDI 1.58.0 into 1.10 operator release #2727

GingerGeek closed this issue 6 months ago

GingerGeek commented 6 months ago

Is this a BUG REPORT or FEATURE REQUEST?: Feature Request

/kind enhancement

What happened: The current operator release (1.10) includes version 1.57.0 of CDI and related packages.

What you expected to happen: Ideally we would like CDI 1.58.0, so that we have a default storage class specifically for virtualization (kubevirt/containerized-data-importer#2913).

How to reproduce it (as minimally and precisely as possible): N/A

Anything else we need to know?:

#2670 pulled the CDI bump into this repo's main branch via a bot. Is it possible to ask the bot to update the release branch as well?

I'm not sure whether a backport such as this is something you would usually do.


tiraboschi commented 6 months ago

Hi @GingerGeek, is there any specific reason for this request? As a general rule we try to keep the y-streams of the sibling projects in lockstep: on the HCO v1.10.z stream we automatically bump to v1.0.z releases of kubevirt/kubevirt, v1.57.z of CDI, v0.89.z of CNAO, and so on. We expect to consume CDI v1.58.z releases as part of the HCO v1.11.z stream.

GingerGeek commented 6 months ago

Hi,

Thanks for the explanation. My main motivation was to pull in kubevirt/containerized-data-importer#2913, which allows you to tag a storage class as the default for VMs.
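
For context, that CDI change adds a virtualization-specific default that sits alongside the normal cluster default. A minimal sketch of how it would be used, assuming the `storageclass.kubevirt.io/is-default-virt-class` annotation from kubevirt/containerized-data-importer#2913 (the storage class name and provisioner below are placeholders):

```yaml
# Mark an existing StorageClass as the default for virtualization
# workloads only; the cluster-wide default StorageClass is untouched.
# Assumption: annotation name as introduced in
# kubevirt/containerized-data-importer#2913; name and provisioner
# are placeholders for cluster-specific values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vm-storage                      # placeholder
  annotations:
    storageclass.kubevirt.io/is-default-virt-class: "true"
provisioner: rbd.csi.ceph.com           # placeholder
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```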

The default storage class within my cluster was a standard ODF-backed volume, so our VMs raised warnings related to VirtualMachineCRCErrors. My current workaround is to switch the cluster default to a VM storage class that sets the "krbd:rxbounce" map option.
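
As a sketch of that workaround: with a ceph-csi RBD provisioner (as used by ODF/Rook), the map option can be passed through the StorageClass `mapOptions` parameter. The name, pool, clusterID, and secret references below are placeholders for cluster-specific values:

```yaml
# RBD StorageClass tuned for VM disks; "krbd:rxbounce" is the map
# option that avoids the VirtualMachineCRCErrors warnings.
# All names, IDs, and secret references are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vm-storage-rxbounce
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFeatures: layering
  mapOptions: "krbd:rxbounce"
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
allowVolumeExpansion: true
```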

My preference would be not to have to modify cluster-wide defaults; indeed, prior to installing HCO I didn't even have a default StorageClass in the cluster.

Totally understand that pulling an upstream update into an existing release stream isn't your usual process, so I'll close this for now and await the next release!