dymurray closed this issue 2 years ago

OpenShift replaces the SCC with the restricted value even if we set the pod annotation openshift.io/scc: anyuid manually in the pod restore plugin.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.

/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
@openshift-bot: Closing this issue.
We encountered an issue in our testing where a pod running prior to backup was not assigned the same SCC when the pod was restored via Velero.
The pod running on the OCP cluster was assigned the anyuid SCC because it was created by a cluster-admin (the default SCC for cluster-admin-created resources is anyuid). At restore time, since the Velero service account is the user that creates the pod during the restore process, the pod was instead assigned the restricted SCC.

There are two potential solutions for this. The first is that the pod definition should include enough information about its desired securityContext to prevent this from happening (this assumes every application is written perfectly, which we should guard against). The second is that we may want to investigate whether setting the SCC annotation on the resource itself forces OCP to admit the pod under the same SCC.
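As a rough sketch of the two approaches, a restored pod manifest could carry both an explicit securityContext and the SCC annotation (the pod name, image, and runAsUser value below are illustrative, not taken from the original report):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-restored-pod        # hypothetical name for illustration
  annotations:
    # openshift.io/scc is the annotation OpenShift sets to record which SCC
    # admitted the pod; whether setting it manually on restore is honored
    # by the admission process is the open question in this issue.
    openshift.io/scc: anyuid
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        runAsUser: 0   # explicit UID so the pod declares its own requirements
                       # instead of relying on the restoring user's SCC
```

Even with the annotation set, admission is driven by the SCCs granted to the creating user or service account, which is why the restore performed by the Velero service account fell back to restricted.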