Closed: pohly closed this issue 2 years ago
/sig apps
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Still a valid question...
+1. I'd like to know this too. Having StatefulSets without a Service makes sense to me if they don't care about a stable identity, just that there is never more than one.
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Would a headless service be the solution to this problem?
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
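For illustration, here is a minimal sketch of that pairing; the names, labels, port, and image (my-app, 8080) are placeholders invented for this example, not taken from any manifest in this thread:

```yaml
# Hypothetical example: a headless Service (clusterIP: None) plus a StatefulSet
# that references it via serviceName. All names, labels, ports and the image
# are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None          # "headless": no virtual IP, no load-balancing
  selector:
    app: my-app
  ports:
    - port: 8080           # mainly relevant if you want SRV records
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app      # points at the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

With this in place each pod gets a stable per-pod DNS name of the form my-app-0.my-app.<namespace>.svc.cluster.local, which is the DNS/SRV use case discussed later in the thread.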
Noah Huppert notifications@github.com writes:
Would a headless service be the solution to this problem?
It's better, but still a useless object that could be avoided if we could get clarification that a StatefulSet is okay to use without a Service.
What is the status of this?
@Noah-Huppert Yes, a headless service was the solution for me. From what I understand it's the only workaround at the moment.
I've launched StatefulSets without Services and it seems to work OK. We're just looking for a guarantee that this will continue to be safe going forward. The docs say it's required, but the implementation did not enforce it last I looked.
We will always support this. You can close this issue. Please feel free to open an issue against the docs repository, or (if you are feeling generous) a PR against the same.
From the sig-apps channel: @kowens @kfox1111 if you don't need DNS for the Pods, it is supported.
@kowens the reason the docs are written as-is is for conceptual simplicity. You do need a Service if you want to generate CNAME and SRV records for the Pods, which is the most common use case.
So, they won't break Service-less StatefulSets.
So, we had fun running into this issue today, learned a bunch about networking, and are passing it on; hopefully someone else will find this useful.
If you create a StatefulSet with a headless Service, the ports exposed in the headless Service don't matter. Communication in this case happens via DNS directly to the pod; kube-proxy is not involved at all. We found this out the hard way: the exposed port on the Service was configured wrongly but communication still worked, yet once Istio was installed it did not 😄
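To make that failure mode concrete, here is a sketch reusing the hypothetical my-app names from the earlier example, with a deliberately mismatched Service port. Because clients resolve the per-pod DNS name and connect straight to the container's real port, the port declared on the headless Service is never consulted until something like a sidecar proxy starts enforcing it:

```yaml
# Sketch only; the my-app names, ports and the client image are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
    - port: 9999           # wrong on purpose: the app really listens on 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: client
      image: curlimages/curl
      # DNS resolution of the per-pod name returns the pod IP directly, so this
      # hits the container's real port (8080) and works despite the Service
      # declaring 9999. A proxy such as Istio, which does take the Service
      # definition into account, can then break it, as described above.
      command: ["curl", "http://my-app-0.my-app.default.svc.cluster.local:8080"]
  restartPolicy: Never
```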
fyi, follow-up PR here: https://github.com/EventStore/EventStore.Charts/pull/52
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
It looks like when going to apps/v1, this actually did become a required field: error: error validating "test.yaml": error validating data: ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1.StatefulSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
This should be made less restrictive if possible.
@kfox1111 if you turn validation on in apps/v1beta2, you will still face this issue
Deploy failed: error: error validating "STDIN": error validating data: ValidationError(StatefulSet.spec): missing required field "serviceName" in io.k8s.api.apps.v1beta2.StatefulSetSpec; if you choose to ignore these errors, turn validation off with --validate=false
I was told that it was marked as required in the docs but was not actually required. Looking at it now, though, it is required to pass validation. I would request that this be made optional in validation so that it is actually optional in practice. Turning off all validation on the StatefulSet just to make the Service optional is not a good way of doing this. For things not needing DNS, the Service should be optional. There are a lot of things that could benefit from being StatefulSets but don't need DNS.
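For reference, a minimal sketch of the workaround this thread keeps coming back to: set serviceName purely to satisfy apps/v1 validation without ever creating the Service object it names. Everything here (names, image) is a placeholder, not taken from the CSI manifests linked elsewhere in this issue:

```yaml
# Sketch of the "serviceName without a Service" workaround; all names and the
# image are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: singleton-controller
spec:
  serviceName: does-not-exist    # required by validation; no such Service is ever created
  replicas: 1                    # the point is "at most one pod", not DNS
  selector:
    matchLabels:
      app: singleton-controller
  template:
    metadata:
      labels:
        app: singleton-controller
    spec:
      containers:
        - name: controller
          image: registry.example.com/controller:1.0   # placeholder image
```

This passes kubectl validation today; whether it stays supported going forward is exactly the guarantee this issue asks for.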
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen /remove-lifecycle rotten
@kfox1111: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@kfox1111 should be a collaborator, right?
/reopen
@pohly: Reopened this issue.
@pohly: This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
https://github.com/kubernetes/kubernetes/issues/69608#issuecomment-594677833 still seems to be the state of this issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
At this point the two CSI examples from the original ticket have gone away (see https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/pull/521 and https://github.com/ceph/ceph-csi/pull/414 which replaced the StatefulSet singletons with Deployments using leader-election).
Are there any practical/current use-cases for StatefulSet without a backing Service? The ticket is approaching three years old and has not yet been triaged, suggesting there's not much appetite/motivation to advance the proposed change in https://github.com/kubernetes/kubernetes/issues/69608#issuecomment-594677833.
This is still relevant, for example here: https://github.com/kubernetes-csi/csi-driver-host-path/blob/c480b671f63c142defd2180a6ca68f85327c331f/deploy/kubernetes-1.21/hostpath/csi-hostpath-plugin.yaml#L189-L199
The Service referenced there never gets created because this issue clarified that this is okay.
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
https://github.com/kubernetes/kubernetes/issues/69608#issuecomment-1002681415 intended to keep this ticket active, but named the wrong lifecycle, I assume by accident.
@TBBle: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@djschny: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
Developers have started to use StatefulSet without a Service definition as a way to deploy a pod on a cluster exactly once, for example here: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/deploy/kubernetes/stable/controller.yaml
Conceptually that makes sense when the app running in the pod doesn't accept any connections from outside. In this example, it's the CSI sidecar containers which react to changes in the apiserver.
But is this a supported mode of operation for a StatefulSet? https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#statefulset-v1-apps says that serviceName is mandatory. The example above gets around this by setting serviceName without defining a corresponding Service.
The concern (raised by @rootfs in https://github.com/ceph/ceph-csi/issues/81) is that while it currently works, future revisions of the StatefulSet controller might not support this anymore.
What you expected to happen:
Clarify in the docs that the Service definition is not needed or (better?) make serviceName optional.
Environment:
Kubernetes version (use kubectl version): tested on 1.11 and 1.12