MarcusElevait opened 3 weeks ago
Usually, it's recommended to use labels.
Would you please detail why you cannot use labels in this use case?
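For context, the labels approach usually means exposing pod labels to containers through the Downward API. A minimal sketch, with a hypothetical label key and placeholder image:

```yaml
# Hypothetical pod snippet: expose a pod label to a container via the
# Downward API, both as an env var and as a mounted file.
apiVersion: v1
kind: Pod
metadata:
  name: traefik
  labels:
    zerotier-identity: member-a     # hypothetical label
spec:
  containers:
    - name: zerotier
      image: zerotier/zerotier      # placeholder image
      env:
        - name: ZT_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['zerotier-identity']
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo   # labels appear as /etc/podinfo/labels
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```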
We have a zerotier client and traefik running in the same pod. For the zerotier configuration we need the exact name of the zerotier container. It has something to do with our zerotier redundancy setup; I won't bother you with the details there :-) But labels don't work in this case, because the labels are not known from within the zerotier container.
If you like, we can raise a PR for the StatefulSet?
It's possible to query the API server from a pod, with RBAC and a service account token (sketched below). A Service can also be used, and so can CoreDNS.
Would you please detail why you cannot use a recommended and widely used k8s mechanism?
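A minimal sketch of the API-server approach, assuming the pod only needs to read Pod objects in its own namespace; every name here is a placeholder:

```yaml
# Role granting read access to Pod objects (placeholder names).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
# Bind the Role to the pod's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
  - kind: ServiceAccount
    name: traefik        # assumed service account of the pod
    namespace: default   # assumed namespace
# From inside a container, the token is mounted at
# /var/run/secrets/kubernetes.io/serviceaccount/token and can be used as:
#   TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
#   curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
#     -H "Authorization: Bearer $TOKEN" \
#     https://kubernetes.default.svc/api/v1/namespaces/default/pods
```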
Okay, so I'll try to explain our use case:
In the same pod as the traefik container we have two zerotier containers running. We have two for redundancy reasons. Both zerotier clients need to join a zerotier network. To join, we need to provide each of them with a key-pair. Each key-pair is saved in a file. So we need to provide one of these files to each of the two clients. Up to here it's pretty straightforward.
But when one container gets killed (for whatever reason) and needs to be spun up again, it needs to know which of the two files it can take, i.e. which file is not used by the other container. With a StatefulSet we could just name the file after the pod name, because it would be static. With the Deployment, we need to integrate a workaround to figure out which file is already used by the other container (see the sketch after this message).
So I guess our use case is pretty special, but for us it would be way easier if it were a StatefulSet.
Does this make sense to you?
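To make the naming idea concrete: with a StatefulSet the pod name is stable (traefik-0, traefik-1, ...), so each container could derive its key-pair file from it via the Downward API. A hypothetical sketch, not the chart's actual manifest:

```yaml
# Hypothetical StatefulSet snippet: the stable pod name selects the
# key-pair file, so a restarted container never has to guess which
# file its sibling already took.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: traefik
spec:
  serviceName: traefik              # headless service for stable identity
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: zerotier-1
          image: zerotier/zerotier  # placeholder image
          env:
            - name: POD_NAME        # always "traefik-0" for replica 0
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          # on startup, load e.g. /keys/identity-${POD_NAME}-zerotier-1
        - name: zerotier-2
          image: zerotier/zerotier  # placeholder image
          # same pattern with a second, distinct file name
```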
I'm sorry, but still not 😅. Especially on redundancy: why two containers with traefik in the same pod? It means with 3 proxies, there are 6 zerotier containers and a total of 9 containers?
For specific network needs, like when using a mesh, the known architecture is to use one sidecar per pod, and so two containers per replica. With 3 proxies, there are 3 sidecar containers which provide the specific network layer.
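For illustration, a sketch of that sidecar pattern; names and image tags are placeholders:

```yaml
# One network sidecar per pod: two containers per replica,
# three sidecars total with replicas: 3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  replicas: 3
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: traefik
          image: traefik:v2.10      # assumed tag
        - name: zerotier            # single sidecar providing the network layer
          image: zerotier/zerotier  # placeholder image
```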
Another known architecture is to set the network-specific layer on the node directly; see this blog post, or this chart with a daemonset, or here.
"why two containers with traefik in the same pod?" For redundancy reasons. We can't make use of replicas here to have redundant zerotier clients, because then we would again have the problem that we need to provide a unique key-pair to every zerotier client. And how could we provide a unique key-pair to every replica? I don't know if there is a possibility for this.
What did you expect to see?
In our use case we deploy traefik in a pod alongside a zerotier client, so that we can access services in the cluster via our company zerotier network. The zerotier client gets a unique identity in the zerotier network. For this it would be good to install it as a StatefulSet instead of a Deployment, because we need the pod name to be static. It would be cool if there were an option to install it as a StatefulSet (similar to the current option to deploy it as a DaemonSet instead of a Deployment).
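If the chart's existing kind switch were extended, the option might look like this values.yaml sketch (assuming the chart's current deployment.kind option, which toggles between Deployment and DaemonSet today; StatefulSet here is the hypothetical addition):

```yaml
# values.yaml sketch: extending the existing kind switch.
deployment:
  enabled: true
  kind: StatefulSet   # hypothetical; currently Deployment | DaemonSet
```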