kubernetes / cloud-provider-openstack

Apache License 2.0

Helm chart out of date, for cinder-csi ? #2116

Closed senare closed 9 months ago

senare commented 1 year ago

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened: I can only get a PVC to work using the manifests, not the Helm release.

I have not pinpointed the issue, but it seems a few changes have been merged since the last Helm release.

name = "cinder-csi"
chart = "openstack-cinder-csi"
version = "2.3.0"

Would it be possible to have a new release with newer/latest dependencies?

What you expected to happen:

I would like to use Helm to manage install, update, etc.

How to reproduce it:

Set up a cluster; I am installing Talos on OpenStack.

details => https://github.com/senare/talos-k8s-terraform-openstack

and then I apply

resource "helm_release" "openstack_cinder_csi" {
  name       = "cinder-csi"
  chart      = "openstack-cinder-csi"
  version    = "2.3.0"

  namespace  = "kube-system"
  repository = "https://kubernetes.github.io/cloud-provider-openstack"

  # Point the chart at an existing cloud-config secret instead of
  # having it generate one.
  set {
    name  = "secret.enabled"
    value = "true"
  }

  set {
    name  = "secret.name"
    value = "cloud-config"
  }

  # Make the "delete" reclaim-policy storage class the cluster default.
  set {
    name  = "storageClass.delete.isDefault"
    value = "true"
  }

  values = [
    templatefile("./config/csi-values.yaml",
      {
        auth_url    = "https://${lower(data.openstack_identity_auth_scope_v3.scope.region)}.citycloud.com:5000/v3/",
        username    = data.openstack_identity_auth_scope_v3.scope.user_name,
        password    = var.os_password,
        tenant_name = data.openstack_identity_auth_scope_v3.scope.project_name,
        domain_name = data.openstack_identity_auth_scope_v3.scope.project_domain_name,
        subnet_name = data.openstack_networking_network_v2.external-network.name,
        cluster_id  = var.prefix
      })
  ]
}
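
For reference, one way to see which chart versions the repository actually publishes is to query the Helm repo directly (a sketch using Helm 3 syntax; the `cpo` repo alias is an arbitrary choice):

```shell
# Add the cloud-provider-openstack chart repo under the alias "cpo"
helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
helm repo update

# List every published version of the cinder-csi chart
helm search repo cpo/openstack-cinder-csi --versions
```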
zetaab commented 1 year ago

Please check the latest versions; is this solved?

senare commented 1 year ago

I am not really sure what you are asking me to check?

I am using this as a workaround, i.e. deploying the latest via manifests/kustomize, which seems to work for my needs (i.e. PVC => Cinder):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-controllerplugin.yaml
  - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
  - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-nodeplugin.yaml
  - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
  - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/csi-cinder-driver.yaml
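
As a sanity check, the workaround above can be applied and verified with a throwaway PVC (a sketch; the pod label and `storageClassName` below are assumptions, so substitute whatever your manifests/values actually define):

```shell
# Apply the kustomization (run from the directory containing kustomization.yaml)
kubectl apply -k .

# Confirm the controller plugin pods come up
# (label is an assumption based on the upstream manifests)
kubectl -n kube-system get pods -l app=csi-cinder-controllerplugin

# Create a throwaway PVC to confirm Cinder provisioning works
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cinder-sc-delete  # assumption: adjust to your storage class
EOF

kubectl get pvc csi-test-pvc   # should reach status "Bound"
kubectl delete pvc csi-test-pvc
```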

But I would much prefer to be able to lean on a Helm chart with a new release.

jichenjc commented 1 year ago

@zetaab I think we may need a new Helm chart release that reflects the latest changes recently merged.

senare commented 1 year ago

How can I help?

I am not sure what needs to be done in order for there to be a new Helm chart release, but given some guidance I would be willing to give it a try.

jichenjc commented 1 year ago

I don't remember exactly, but judging from https://github.com/kubernetes/cloud-provider-openstack/commit/34d1d5b2677b7d74e5a45fbc69cb27f76877bd39

there should be CI somewhere that triggers a Helm chart release when we update the chart version. Maybe we can try a new tag.

senare commented 1 year ago

Please do! If successful, I can try to verify whether at least my problem gets solved.

jichenjc commented 1 year ago

I am not sure I understand the current release logic correctly:

# helm search openstack
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
cpo/openstack-cinder-csi                2.3.0           v1.25.0         Cinder CSI Chart for OpenStack
cpo/openstack-cloud-controller-manager  1.4.0           v1.25.0         Openstack Cloud Controller Manager Helm Chart
cpo/openstack-manila-csi                1.6.0           v1.25.0         Manila CSI Chart for OpenStack

The chart is only released from release branches (https://github.com/kubernetes/cloud-provider-openstack/blob/master/.github/workflows/release.yaml#L6),

so with that you are getting the 2.3.0 CSI chart from release-1.26. If you have a special requirement, we need to backport the code/chart update; then you should be able to see the new chart.
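
One way to check which app version a given chart release targets (a sketch, assuming the chart repo has been added under the alias `cpo`):

```shell
# Show chart metadata for a pinned version; the appVersion field tells you
# which cloud-provider-openstack release the chart defaults to
helm show chart cpo/openstack-cinder-csi --version 2.3.0 \
  | grep -E '^(version|appVersion):'
```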

Lirt commented 1 year ago

Hi,

I see something possibly related. I had a PR that was merged: https://github.com/kubernetes/cloud-provider-openstack/pull/2105/files. The cinder-csi chart version was changed to 2.3.1.

That version was never released, which would be fine, but none of the newer released versions (2.24.0, 2.25.0, 2.26.0) contain the change I made in the PR, which is odd.

Would it be possible to release a patch version that rebases the previous work correctly?

If I understand correctly, those releases are done as a compatibility strategy to match the Kubernetes release to the cinder-csi release. I think they should contain the previous work done in master; otherwise it will be pretty hard to understand what is being released.

In the 2.27.0-alpha release I can finally see my change.

Overall, with this kind of release strategy you most likely only need to maintain the correct (compatible) versions of the Docker images in values.yaml, and the rest of the branch can be kept in line with master.

jichenjc commented 1 year ago

https://github.com/kubernetes/cloud-provider-openstack/issues/2175 has some discussion around that

I will close this issue if everyone agrees that that issue and the related PR fixed the problem.

Lirt commented 1 year ago

Hmmm, here is my point of view.

If this is the currently supported matrix for Helm charts in this repository:

cinder-csi-plugin

branch        current version   new version
master        2.3.2             2.27.0-alpha.0
release-1.26  2.3.0             2.26.0
release-1.25  2.3.0             2.25.0
release-1.24  2.2.1             2.24.0

Then you should backport the latest Helm chart changes to all release lines.

Here are 2 examples of commits/changes that are not available in charts with tags 2.24, 2.25, 2.26 (I assume anything after those commits is not included):

The release policy is fine with me. If you decide to drop support for a certain release, I'm OK with that. But if a release line is supported, all the work should be included in its releases.

So for now, releasing patch versions 2.24.1, 2.25.1, and 2.26.1 with all the missing Helm chart changes since those releases would fix the situation.
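
Such a backport could be sketched roughly as follows (branch names, the commit SHA placeholder, and the exact chart path are assumptions; the actual release would still be cut by CI from the release branch):

```shell
# Hypothetical backport flow; <commit-sha> is the chart change on master
git fetch upstream
git checkout -b backport-chart-fix upstream/release-1.26
git cherry-pick <commit-sha>

# Bump the chart patch version so CI publishes e.g. 2.26.1
# (path assumed from the repo layout)
$EDITOR charts/cinder-csi-plugin/Chart.yaml
git commit -am "Backport helm chart fix, bump chart to 2.26.1"
# then open a PR against release-1.26
```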

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 10 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 9 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/cloud-provider-openstack/issues/2116#issuecomment-1951385036):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.