uhthomas opened 1 year ago
It looks like setting the security context for the deployment and the container templates fixes this.
deployment template (`.spec.template.spec`):

```yaml
securityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
```
container template (`.spec.template.spec.containers[]`):

```yaml
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
```
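Putting the two snippets together, a minimal sketch of a Deployment with both contexts applied might look like the following. The names and image here are placeholders, not the operator's actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: operator                        # placeholder name
spec:
  selector:
    matchLabels:
      app: operator
  template:
    metadata:
      labels:
        app: operator
    spec:
      securityContext:                  # pod-level context
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: operator
          image: example/operator:latest   # placeholder image
          securityContext:              # container-level context
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
```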
Sounds like your cluster has a restrictive pod security policy. The last time I tried making our manifest stricter, some things stopped working, so right now I can't guarantee the operator will function correctly with those changes. But we'll look into making the manifest we ship stricter by default.
Thanks @danderson.
This is the default security policy for Talos Linux and possibly for all clusters since Kubernetes 1.23.
Pod Security admission (PSA) is enabled by default in v1.23 and later.
https://kubernetes.io/docs/tutorials/security/cluster-level-pss/
I could be wrong; the KEP suggests this is the default from 1.25.
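For reference, Pod Security admission is configured per namespace via labels. A sketch of a Namespace manifest that enforces the `restricted` level (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tailscale   # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
```

A cluster that applies these labels (or an admission config with `enforce: restricted` as a default) will reject pods that don't set the securityContext fields discussed above.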
Determining the right level of strictness will be hard given the nature of what the pods do.
Some clusters will reject it anyway; for example, the operator doesn't work in GKE Autopilot clusters because the warden won't let you add the NET_ADMIN capability. (The warden is a custom admission controller that blocks a lot of things.)
Unless I'm missing something, weren't Pod Security Policies "deprecated in Kubernetes v1.21, and removed from Kubernetes in v1.25" (here)?
Yes, PSP was replaced with Pod Security Admission.
https://kubernetes.io/docs/concepts/security/pod-security-admission/
Ok, I see my confusion. The securityContext config on a pod/container pretty much remains the same. Thanks.
Using this security config:

```yaml
securityContext:
  runAsNonRoot: true
  runAsGroup: 3000
  runAsUser: 1000
  fsGroup: 2000
  seccompProfile:
    type: RuntimeDefault
podSecurityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
```
I get this error:

```
{"level":"fatal","ts":"2024-10-08T16:43:10Z","logger":"startup","msg":"starting tailscale server: tsnet: mkdir /.config: permission denied","stacktrace":"main.initTSNet\n\ttailscale.com/cmd/k8s-operator/operator.go:160\nmain.main\n\ttailscale.com/cmd/k8s-operator/operator.go:97\nruntime.main\n\truntime/proc.go:272"}
```
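A common cause of that error is running as a non-root user with no writable home directory, so tsnet can't create its config directory under `/.config`. One possible workaround is pointing `HOME` at a writable `emptyDir` volume — a hedged sketch only; the env var and mount path are assumptions, not the operator's documented interface:

```yaml
# Hypothetical fragment: give the non-root user a writable home so
# tsnet can create its config directory. Not the operator's documented fix.
spec:
  containers:
    - name: operator
      env:
        - name: HOME
          value: /home/nonroot        # assumed writable path
      volumeMounts:
        - name: home
          mountPath: /home/nonroot
  volumes:
    - name: home
      emptyDir: {}
```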
There is a workaround for Talos; I just ran into this with version v1.8.1.
You can add an exemption in `controlplane.yaml` so that PodSecurity is not enforced in a given namespace.
I added the namespace `tailscale` to the list and could install the Helm chart without warnings afterwards. (You need to apply the configuration on the node; it could be added without reinstalling, but it invalidated my kubeconfig.)
```yaml
# API server specific configuration options.
apiServer:
  image: registry.k8s.io/kube-apiserver:v1.31.1 # The container image used in the API server manifest.
  # Extra certificate subject alternative names for the API server's certificate.
  certSANs:
    - 192.168.9.119
  disablePodSecurityPolicy: true # Disable PodSecurityPolicy in the API server and default manifests.
  # Configure the API server admission plugins.
  admissionControl:
    - name: PodSecurity # Name is the name of the admission controller.
      # Configuration is an embedded configuration object to be used as the plugin's
      configuration:
        apiVersion: pod-security.admission.config.k8s.io/v1alpha1
        defaults:
          audit: restricted
          audit-version: latest
          enforce: baseline
          enforce-version: latest
          warn: restricted
          warn-version: latest
        exemptions:
          namespaces:
            - kube-system
            - tailscale
          runtimeClasses: []
          usernames: []
        kind: PodSecurityConfiguration
```
What is the issue?
Following https://tailscale.com/kb/1236/kubernetes-operator/ using https://github.com/tailscale/tailscale/tree/abc874b04e85619afeed5f187f6b6c150f4eefbe/cmd/k8s-operator/manifests results in:
Steps to reproduce
Apply the manifests from https://github.com/tailscale/tailscale/tree/abc874b04e85619afeed5f187f6b6c150f4eefbe/cmd/k8s-operator/manifests with
`kubectl apply -f .`
and observe the warning.

Related? https://github.com/kubernetes-sigs/kubebuilder/discussions/2840
Are there any recent changes that introduced the issue?
Not sure.
OS
Linux, Other
OS version
Kubernetes 1.26.1
Tailscale version
N/A
Other software
N/A
Bug report
No response