raravena80 opened this issue 6 years ago
Hi. I think I understand the problem. Can I volunteer to work on a fix? I'm digging around but if anyone has obvious pointers for me, that would help.
New to the k8s codebase.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten /remove-lifecycle stale
Hi @kasisnu, did you get a chance to work on this? Thx!
Is there any reason for this design?
The inability to add additional claims is a real PITA...! Hopefully this doesn't get ignored forever...
Shocked to see this is still untouched after a year.
At the very least, updating the StatefulSet while replicas is 0 could be supported.
A workaround I'm using is to copy the content of the StatefulSet object (for instance in YAML format), delete the original STS with --cascade=false so that the pods are not stopped, and recreate it after adding the new volumeClaimTemplates to its spec. The PVCs and pods are then recreated. It's a better solution than setting replicas to 0 in the case of a clustered app (Kafka, Elasticsearch, MongoDB...).
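For anyone who wants to try this, here is a rough sketch of that workaround (the StatefulSet name web is only an example; on kubectl 1.20+ the orphaning flag is spelled --cascade=orphan):

```shell
# 1. Save the current StatefulSet definition.
kubectl get statefulset web -o yaml > web-sts.yaml

# 2. Edit web-sts.yaml: add the new entry under .spec.volumeClaimTemplates
#    (plus a matching volumeMount in the pod template) and strip
#    server-managed fields such as status, resourceVersion and uid.

# 3. Delete only the StatefulSet object, leaving its pods and PVCs in place.
kubectl delete statefulset web --cascade=orphan   # --cascade=false on older kubectl

# 4. Recreate the StatefulSet from the edited manifest; the controller adopts
#    the existing pods, and the new PVCs appear as pods are recreated.
kubectl apply -f web-sts.yaml
```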
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
We'll be happy to have this ability!
I am currently working on a fix for this. It's maybe 70% complete now. I am new to the contribution community here so there is a little bit of a learning curve in relation to contributing.
I don't have the option to assign this to myself. Can an admin assign it to me or add me as a collaborator on the repo/project? :)
/assign dav3ydoo
How is it going, @dav3ydoo? Have you made any progress on this?
I would love to have this working as well
Hi @giovannirco, I did not make any progress, or rather just enough progress to determine we weren't going to invest the time to implement this. After looking into it, there are some nasty edge cases that would require significant thought, work, and collaboration with the storage SIG, which unfortunately wasn't worthwhile for our use case.
This issue happens when updating an existing volumeClaimTemplate as well. If you give a wrong storageClassName in the volumeClaimTemplates section when creating the StatefulSet and later update it to the correct one, the PVCs are not recreated or updated. To make it work you have to delete the PVCs that were already created and recreate the StatefulSet as well.
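In other words, something along these lines (a sketch only; db and data are illustrative names, and deleting the PVCs discards the data on those volumes):

```shell
# Recreate PVCs that were stamped out with the wrong storageClassName.
kubectl delete statefulset db            # the pods go away with the StatefulSet
kubectl delete pvc data-db-0 data-db-1   # PVCs created from the old template
kubectl apply -f db-statefulset.yaml     # re-apply with the corrected storageClassName
```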
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
> A workaround I'm using is to copy the content of the StatefulSet object (for instance in YAML format), delete the original STS with --cascade=false so that the pods are not stopped, and recreate it after adding the volumeClaimTemplates to its spec. The PVCs and pods are recreated. It's a better solution than setting replicas to 0 in the case of a clustered app (Kafka, Elasticsearch, MongoDB...).
This does not work: after adding a template named 'www2' to volumeClaimTemplates and recreating the STS, the controller tries to update the existing pod and fails:
```
Type     Reason        Age   From                    Message
----     ------        ----  ----                    -------
Warning  FailedUpdate  54s   statefulset-controller  update Pod web-0 in StatefulSet web failed error: Pod "web-0" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)

  core.PodSpec{
      Volumes: []core.Volume{
-         {
-             Name: "www2",
-             VolumeSource: core.VolumeSource{
-                 PersistentVolumeClaim: &core.PersistentVolumeClaimVolumeSource{ClaimName: "www2-web-0"},
-             },
-         },
          {Name: "www", VolumeSource: {PersistentVolumeClaim: &{ClaimName: "www-web-0"}}},
          {Name: "kube-api-access-hmfv7", VolumeSource: {Projected: &{Sources: {{ServiceAccountToken: &{ExpirationSeconds: 3607, Path: "token"}}, {ConfigMap: &{LocalObjectReference: {Name: "kube-root-ca.crt"}, Items: {{Key: "ca.crt", Path: "ca.crt"}}}}, {DownwardAPI: &{Items: {{Path: "namespace", FieldRef: &{APIVersion: "v1", FieldPath: "metadata.namespace"}}}}}}, DefaultMode: &420}}},
      },
```
/remove-lifecycle rotten /remove-lifecycle stale
Hi, is someone working on this issue? I do not actually need to add a new volumeClaimTemplate, but I need to resize an existing one. Resizing itself is not a problem, but the StatefulSet has autoscaling enabled, and if a new PVC is created it has the wrong size. I use Helm for the deployment, and the only way to fix the problem is to uninstall the chart and redeploy with the new setting. At least a way to edit the existing volumeClaimTemplates would solve my problem.
Hi, is someone working on the issue?
Any update on when the feature could be ready for test?
I've just found this issue after encountering the error message, and would like to ask:
Does anyone understand WHY this restriction is in place? Adding new volumeClaimTemplates is not the only issue - it would be nice to be able to update the sizes of existing ones. I don't expect this to resize existing volumes, but it should be possible to specify a new size for future scaling up.
/remove-lifecycle stale
Up. Could someone at least explain why this restriction is in place? If your StatefulSet is configured to auto-scale based on some criteria and you need to change the size of the PVCs, it is very annoying.
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/kind feature /sig storage
What happened:
Currently, you get this error if you want to update an existing StatefulSet with a new volumeClaimTemplate:
What you expected to happen:
Allow adding new volumeClaimTemplates to an existing StatefulSet.
How to reproduce it (as minimally and precisely as possible):
For example:
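A minimal sketch of such a reproduction (the names web, www and www2 are illustrative):

```shell
# Create a StatefulSet with a single volumeClaimTemplate "www".
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF

# Then add a second template "www2" (plus a matching volumeMount) to the same
# manifest and re-apply it: the API server rejects the update, since only
# replicas, template, updateStrategy (and, in newer releases, minReadySeconds
# and persistentVolumeClaimRetentionPolicy) may be changed in a StatefulSet spec.
```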
Anything else we need to know?:
Some background here
Environment:
- Kubernetes version (use kubectl version): all
- Kernel (e.g. uname -a): all