MikeSpreitzer closed this issue 5 months ago.
I tried a few workarounds, none fully successful; they are described in Slack starting at https://kubernetes.slack.com/archives/C021U8WSAFK/p1691118703785549
@pdettori said that this is fixed in `main`. I must admit that I do not remember whether I tested `main` or the latest release.
@MikeSpreitzer since #54, you should now have the flexibility to override security contexts as needed, e.g. with `{}`. That should help with the issue at hand.
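For reference, a minimal sketch of what such an override might look like in a values.yaml. The key path (`kcp.securityContext`) is an assumption for illustration; the chart's own values.yaml is authoritative for the actual structure.

```yaml
# Hypothetical values.yaml override; the "kcp.securityContext" key path is
# illustrative and may differ in the actual chart.
kcp:
  # An empty security context drops the chart's hard-coded defaults, which
  # lets OpenShift's SCC admission assign a UID/fsGroup from the namespace's
  # allowed range instead.
  securityContext: {}
```

Applied as usual (e.g. `helm install ... -f values.yaml`), this should leave the security context for OpenShift to fill in.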
@embik thanks, would it be possible to publish a new release of the Helm chart (e.g. 0.2.6)? The latest release (0.2.5) does not include the change, so setting the security context is still not possible when following the usage instructions in https://github.com/kcp-dev/helm-charts#usage
@pdettori I believe the securityContext changes mentioned in this specific issue never made it into a release (0.2.5 is from February, and the commit that broke the OpenShift deployment, as far as I understand, must have been https://github.com/kcp-dev/helm-charts/commit/be75345f70e6819991145801fd8e4ac44d4c1ee4). Or is this a separate concern, related just to being able to set the securityContext at all?
In any case, I think we are planning to release 0.3.0 fairly soon, since there are one or two breaking changes either already merged or planned. At that point the chart should be much more flexible.
@pdettori FYI, chart version 0.3.0 has been released. Maybe someone can check whether this issue is still present or whether it can now be fixed by overriding the securityContext in values.yaml.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
I tried using the Helm chart to create a Helm "release" in an OpenShift cluster, and the ReplicaSet for the kcp server is rejected by OpenShift.
I put the following in my values YAML:
I found that the ReplicaSet for kcp never got any Pod object created. A `kubectl describe` of that ReplicaSet included the following Event, which explains the problem (line breaks added for readability).