Closed: namloc2001 closed this issue 3 years ago
@vdemeester happy to move the convo (from https://github.com/tektoncd/pipeline/issues/3625) to here if it's more OpenShift related. Also including @sbose78 given their changes in #503. My question being:
Is there a reason why privileged wasn't downgraded all the way to restricted? That way the pipeline pods would run with minimal SCC permissions, and if permissions beyond the restricted SCC are required, we would apply them to the SA we attach to the pipelineRun or taskRun. With the current method, I don't believe I can deploy using the restricted SCC.
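For clarity, the model I have in mind is that Runs default to restricted and anything beyond that is granted per service account, roughly along these lines (the SCC, service account, and namespace names below are placeholders, not anything defined in this repo):

```
# Hypothetical: grant an additional SCC only to the SA that a
# PipelineRun/TaskRun will use, rather than loosening the defaults.
oc adm policy add-scc-to-user anyuid -z my-pipeline-sa -n my-namespace
```

The Run would then reference that SA via its serviceAccountName field.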
I can see from #503 and #504 that privileged was changed (originally to anyuid, but then further) to the nonroot SCC. So via those changes, the SA tekton-pipelines-controller is now granted nonroot.
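(As a sanity check of that grant, the SCC a running controller pod was actually admitted under can be read from its openshift.io/scc annotation; the namespace and label below are assumptions about a typical install:)

```
oc -n openshift-pipelines get pod -l app=tekton-pipelines-controller \
  -o jsonpath='{.items[*].metadata.annotations.openshift\.io/scc}'
```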
What I'm trying to work out is whether this could go further (i.e. restricted SCC), because in 00-release.yaml I can see:

```yaml
containers:
  - name: tekton-pipelines-controller
    image: quay.io/openshift-pipeline/tektoncd-pipeline-controller:v0.18.0
    args: [
      ...
      # This is gcr.io/google.com/cloudsdktool/cloud-sdk:302.0.0-slim
      "-gsutil-image", "gcr.io/google.com/cloudsdktool/cloud-sdk@sha256:27b2c22bf259d9bc1a291e99c63791ba0c27a04d2db0a43241ba0f1f20f4067f",
      # The shell image must be root in order to create directories and copy files to PVCs.
      # gcr.io/distroless/base:debug as of October 16, 2020
      "-shell-image", "registry.access.redhat.com/ubi8/ubi-minimal:latest"
      ...
    ]
    ...
    securityContext:
      allowPrivilegeEscalation: false
      # User 65532 is the distroless nonroot user ID
```
and
```yaml
containers:
  - name: webhook
    # This is the Go import path for the binary that is containerized
    # and substituted here.
    image: quay.io/openshift-pipeline/tektoncd-pipeline-webhook:v0.18.0
    ...
    securityContext:
      allowPrivilegeEscalation: false
      # User 65532 is the distroless nonroot user ID
```
So with reference to tekton-pipelines-controller: does the shell-image (registry.access.redhat.com/ubi8/ubi-minimal:latest) get launched as root? And if so, how, given that the SCC now aligned to this is nonroot?
Assuming that everything can/must now run as nonroot, does that mean this deployment can take place under the restricted SCC? Or are there other requirements of the deployment preventing this?
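If nothing genuinely needs root or a fixed UID any more, a container securityContext along these lines would normally be admissible under restricted; this is a sketch for comparison, not something taken from the release manifest:

```yaml
securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  # No runAsUser pinned: under restricted, OpenShift assigns a UID from
  # the project's openshift.io/sa.scc.uid-range annotation.
  capabilities:
    drop:
      - ALL
```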
This is definitely a security issue which leads to privilege escalation. I don't think it is wise for OpenShift Pipelines to go GA without solving this issue.
@aelbarkani, assuming the change from the privileged SCC was made to the nonroot SCC (rather than the anyuid SCC), it is not a security issue which leads to privilege escalation (as far as I am aware). The restricted and nonroot SCCs are pretty much identical, the obvious difference being that nonroot doesn't force the UID to be assigned from the project range (whilst still ensuring the container cannot run as the root user).
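For reference, the main place the two SCCs differ is the runAsUser strategy; roughly, on a typical cluster (abridged, other fields omitted):

```yaml
# nonroot SCC
runAsUser:
  type: MustRunAsNonRoot   # any non-root UID is accepted
---
# restricted SCC
runAsUser:
  type: MustRunAsRange     # UID must come from the project's allocated range
```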
My concern is that having the SA tekton-pipelines-controller given nonroot SCC access means that the model of "the SA I use will default to the restricted SCC unless I configure things differently" is broken by Tekton on OpenShift. This means users need to work with two different styles.
It might also have implications, given that for OpenShift restricted SCC compatibility we chgrp -R 0 /path/to/dir and chmod g=u /path/to/dir our containers, but we won't be provided with GID=0 unless we run as restricted. It shouldn't be a problem as we can set the runAsUser, but it's just "another thing to be aware of".
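For anyone unfamiliar with that pattern, it is the usual build-time preparation so that an image still works when run with an arbitrary non-root UID that relies on root-group (GID 0) permissions; the path is a placeholder:

```
# At image build time: give group 0 the same permissions as the owner.
chgrp -R 0 /path/to/dir
chmod -R g=u /path/to/dir
```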
I think #492 might provide a resolution/mechanism to answer this.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten /remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Hi, I originally opened this here: https://github.com/tektoncd/pipeline/issues/3625 but was requested/informed to reopen it here. (@gabemontero @vdemeester @siamaksade @sbose78 FYI)
Expected Behavior
I expect the serviceAccount associated with the pipelineRun to have SCC controls applied to it. If it is a new serviceAccount (or any serviceAccount that hasn't been explicitly granted alignment to other SCCs elsewhere), I expect these to be in line with the restricted SCC.
Actual Behavior
I am deploying a pipelineRun via oc create -f pipeline-run.yaml, after having logged into the cluster with my (cluster-admin) personal account. Config is:

The resultant pod is deployed with UID=1005, however I have not granted the new-pipelinerunner serviceAccount the capabilities of any SCCs, so by default it should only be able to function in line with the restricted SCC settings, one of which is that the runAsUser is assigned from the project UID range (openshift.io/sa.scc.uid-range: 1005450000/10000). And yet if I run the id command on my pipeline pod, I get:
Steps to Reproduce the Problem
1. Create a serviceAccount on OpenShift in the tekton-pipelines namespace:
2. Run any pipelineRun with this serviceAccount attached; either do or don't specify podTemplate.securityContext on the pipelineRun (see the sketch after these steps).
3. With the pod running, confirm the UID of the user; it should be in the project range (oc exec -it <pod-name> [-c container_name] -- sh -c "id").
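To make the steps concrete, a minimal sketch of the two objects involved might look like the following (the PipelineRun name and the referenced Pipeline are illustrative placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: new-pipelinerunner
  namespace: tekton-pipelines
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run
  namespace: tekton-pipelines
spec:
  serviceAccountName: new-pipelinerunner
  pipelineRef:
    name: example-pipeline   # placeholder Pipeline name
  # Optional: a pod-level securityContext can be set via the podTemplate.
  podTemplate:
    securityContext:
      runAsNonRoot: true
```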
Additional Info
Kubernetes version:
Output of kubectl version:
Output of oc version: Server <I've redacted> kubernetes v1.16.2+853223d
v0.15.2
System Info:
  Kernel Version: 3.10.0-1160.6.1.el7.x86_64
  OS Image: Red Hat
  Operating System: linux
  Architecture: amd64
  Container Runtime Version: cri-o://1.16.6-18.rhaos4.3.git538d861.el7
  Kubelet Version: v1.16.2+853223d
  Kube-Proxy Version: v1.16.2+853223d