kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

Allow adding more volumeClaimTemplates to an existing statefulSet #69041

Open raravena80 opened 6 years ago

raravena80 commented 6 years ago

/kind feature /sig storage

What happened:

Currently, you get this error if you try to add a new volumeClaimTemplate to an existing StatefulSet:

Error: UPGRADE FAILED: StatefulSet.apps "my-app" is invalid: spec: Forbidden: updates to statefulSet spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.

What you expected to happen:

Adding new volumeClaimTemplates to an existing StatefulSet should be allowed.

How to reproduce it (as minimally and precisely as possible):

For example:

    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
  + - metadata:
  +     name: data2
  +   spec:
  +     accessModes:
  +     - ReadWriteOnce
  +     resources:
  +       requests:
  +         storage: 100Gi
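
Re-applying the modified manifest directly with kubectl (outside of Helm) hits the same validation; a minimal sketch, assuming the StatefulSet spec lives in a file such as statefulset.yaml (the file name is illustrative):

    # Add the second template to the manifest, then try to apply it:
    kubectl apply -f statefulset.yaml
    # The API server rejects the change with the same Forbidden error quoted above.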

Anything else we need to know?:

Some background here

Environment:

kasisnu commented 6 years ago

Hi. I think I understand the problem. Can I volunteer to work on a fix? I'm digging around but if anyone has obvious pointers for me, that would help.

I'm new to the k8s codebase.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

raravena80 commented 5 years ago

/remove-lifecycle rotten /remove-lifecycle stale

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fabiocorneti commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

raravena80 commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

raravena80 commented 4 years ago

/remove-lifecycle stale

raravena80 commented 4 years ago

Hi @kasisnu, did you get a chance to work on this? Thanks!

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

raravena80 commented 4 years ago

/remove-lifecycle stale

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

raravena80 commented 4 years ago

/remove-lifecycle stale

JasonRD commented 4 years ago

Is there any reason for this design?

mkjpryor-stfc commented 4 years ago

The inability to add additional claims is a real PITA...! Hopefully this doesn't get ignored forever...

gigglegrig commented 4 years ago

Shocked to see this still hasn't been touched after a year.

allenhaozi commented 4 years ago

It could at least support updating a StatefulSet whose replicas is 0.

guillaumefenollar commented 3 years ago

A workaround I'm using is to copy the contents of the StatefulSet object (for instance in YAML format), delete the original StatefulSet with --cascade=false so that its pods are not stopped, add the new volumeClaimTemplates to the spec, and recreate it. The PVCs and pods are recreated. It's a better solution than setting replicas to 0 in the case of a clustered app (Kafka, Elasticsearch, MongoDB, ...).
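
A rough sketch of that workaround, assuming a StatefulSet named my-app and a manifest file my-app-sts.yaml (both names illustrative; newer kubectl spells the flag --cascade=orphan):

    # 1. Save the current spec so it can be edited and re-applied
    kubectl get statefulset my-app -o yaml > my-app-sts.yaml
    # 2. Delete only the StatefulSet object; orphaning leaves its pods (and PVCs) running
    kubectl delete statefulset my-app --cascade=orphan   # --cascade=false on older kubectl
    # 3. Add the new entry under volumeClaimTemplates in my-app-sts.yaml
    #    (and drop server-set fields such as status/resourceVersion/uid),
    #    then recreate the StatefulSet; it adopts the still-running pods
    kubectl apply -f my-app-sts.yaml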

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

raravena80 commented 3 years ago

/remove-lifecycle stale

blorby commented 3 years ago

We'd be happy to have this ability!

dav3ydoo commented 3 years ago

I am currently working on a fix for this; it's maybe 70% complete now. I am new to the contribution community here, so there is a bit of a learning curve around contributing.

I don't have the option to assign this to myself. Can an admin assign it to me or add me as a collaborator on the repo/project? :)

raravena80 commented 3 years ago

/assign dav3ydoo

giovannirco commented 3 years ago

How is it going, @dav3ydoo? Did you make any progress on this?

I would love to have this working as well

dav3ydoo commented 3 years ago

Hi @giovannirco, I did not make any real progress, just enough to determine that we weren't going to invest the time to implement this. After looking into it, there are some nasty edge cases that would require significant thought, work, and collaboration with the storage SIG. Unfortunately, that wasn't worthwhile for our use case.

slashpai commented 3 years ago

This issue happens when updating an existing volumeClaimTemplate as well. If you specify the wrong storageClassName in the volumeClaimTemplates section when creating the StatefulSet and later change it to the correct one, the PVC is not recreated or updated with the correct class. To make it work, you have to delete the already-created PVC and recreate the StatefulSet as well.
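
Concretely, the sequence that works in that case looks roughly like this (names illustrative; note the data on the mis-provisioned volume is lost):

    # Delete the StatefulSet (and its pods), then the wrongly-provisioned PVC
    kubectl delete statefulset my-app
    kubectl delete pvc data-my-app-0        # completes once the pod holding it is gone
    # Recreate the StatefulSet with the corrected storageClassName;
    # the controller provisions a fresh PVC from the fixed template
    kubectl apply -f my-app-statefulset.yaml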

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

kasimon commented 3 years ago

/remove-lifecycle rotten

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

july2993 commented 2 years ago

> A workaround I'm using is to copy the content of STS object (for instance in yaml format), delete the original sts with --cascade=false so that pods are not stopped and recreate it after adding the volumeclaimtemplates to its spec. PVC and pods are recreated. It's a better solution than setting the replicas option to 0 in case of clustered app (kafka, elasticsearch, mongodb...).

This does not work: after adding a new claim named 'www2' to volumeClaimTemplates and recreating the STS, the controller fails when it tries to update the existing pod:

  Type     Reason        Age   From                    Message
  ----     ------        ----  ----                    -------
  Warning  FailedUpdate  54s   statefulset-controller  update Pod web-0 in StatefulSet web failed error: Pod "web-0" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
  core.PodSpec{
    Volumes: []core.Volume{
-     {
-       Name: "www2",
-       VolumeSource: core.VolumeSource{
-         PersistentVolumeClaim: &core.PersistentVolumeClaimVolumeSource{ClaimName: "www2-web-0"},
-       },
-     },
      {Name: "www", VolumeSource: {PersistentVolumeClaim: &{ClaimName: "www-web-0"}}},
      {Name: "kube-api-access-hmfv7", VolumeSource: {Projected: &{Sources: {{ServiceAccountToken: &{ExpirationSeconds: 3607, Path: "token"}}, {ConfigMap: &{LocalObjectReference: {Name: "kube-root-ca.crt"}, Items: {{Key: "ca.crt", Path: "ca.crt"}}}}, {DownwardAPI: &{Items: {{Path: "namespace", FieldRef: &{APIVersion: "v1", FieldPath: "metadata.namespace"}}}}}}, DefaultMode: &420}}},
    },
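
Presumably the missing step is that each orphaned pod also has to be deleted so the controller can recreate it against the new template; this is an assumption, not something confirmed in the thread:

    # Hypothetical continuation of the workaround for the example 'web' StatefulSet:
    # existing pods cannot have volumes patched in place, so remove them one at a time
    kubectl delete pod web-0
    # wait until web-0 is recreated and Ready (now mounting both 'www' and 'www2'),
    # then repeat for web-1, web-2, ...
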
k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

charannaik commented 2 years ago

/remove-lifecycle rotten /remove-lifecycle stale

Olli73773 commented 2 years ago

Hi, is someone working on this issue? I don't actually need to add a new volumeClaimTemplate, but I do need to resize an existing one. The resize itself is not a problem, but the StatefulSet has autoscaling enabled, and when a new PVC is created it gets the wrong size. I use Helm for the deployment, and the only way to fix the problem is to uninstall the chart and redeploy with the new setting. Even just a way to edit the existing volumeClaimTemplates would solve my problem.
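
For the resize case, a hedged sketch under the assumption that the StorageClass has allowVolumeExpansion: true (PVC name illustrative): already-provisioned PVCs can be grown in place, while the size used for future replicas can only be changed via the delete-with-orphan/recreate trick above.

    # Expand an existing PVC directly (works only if the StorageClass allows expansion)
    kubectl patch pvc data-my-app-0 --type merge \
      -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
    # The size in volumeClaimTemplates (used when the autoscaler adds replicas) still
    # requires deleting the StatefulSet with --cascade=orphan and re-applying the
    # updated spec, since that field cannot be edited in place.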

Encounter77 commented 2 years ago

Hi, is someone working on the issue?

romangallego commented 2 years ago

Any update on when the feature could be ready for test?

maxb commented 2 years ago

I've just found this issue after encountering the error message, and would like to ask:

Does anyone understand WHY this restriction is in place? Adding new volumeClaimTemplates is not the only issue - it would be nice to be able to update the sizes of existing ones. I don't expect this to resize existing volumes, but it should be possible to specify a new size for future scaling up.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

invidian commented 2 years ago

/remove-lifecycle stale

SStorm commented 1 year ago

Up. Could someone at least explain why this restriction is in place? If your StatefulSet is configured to auto-scale based on some criteria and you need to change the size of the PVCs, it is very annoying.

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

maxb commented 1 year ago

/remove-lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

invidian commented 1 year ago

/remove-lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

southz commented 8 months ago

/remove-lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

southz commented 5 months ago

/remove-lifecycle stale