Closed: caesar0301 closed this issue 2 weeks ago.
This issue is similar to https://github.com/kubernetes-client/c/issues/158. If we always pass "0" to the API server, the default value in Kubernetes will be broken (which might introduce the issue https://github.com/kubernetes-client/c/issues/170).

It seems the best solution would be to replace `int run_as_user` with `int *run_as_user`, but this would bring large changes to the openapi-generator C template. https://github.com/kubernetes-client/c/pull/160#issuecomment-1332782208

The second solution is to introduce a magic number, just like you did. https://github.com/kubernetes-client/c/pull/160#issuecomment-1331611270

I'm still thinking about a better solution. What do you recommend?
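For illustration, here is a minimal sketch of what the pointer-based approach could look like. The struct and helper names are stand-ins, not the actual generated code; the point is that a `NULL` pointer can mean "field not set", so `0` remains a valid UID:

```c
#include <stddef.h>
#include <cjson/cJSON.h>

/* Hypothetical excerpt of a generated model with a pointer field:
 * NULL means "runAsUser was never set"; any non-NULL value
 * (including 0) is serialized as given. */
typedef struct {
    int *run_as_user; /* NULL => unset; otherwise the UID to run as */
    /* ... other fields elided ... */
} security_context_sketch_t;

/* Sketch of the corresponding serializer logic. */
static int add_run_as_user(cJSON *item, const security_context_sketch_t *ctx)
{
    if (ctx->run_as_user == NULL) {
        return 0; /* field unset: omit it, so the image USER applies */
    }
    /* 0 (root) is now representable and is sent to the API server */
    return cJSON_AddNumberToObject(item, "runAsUser",
                                   *ctx->run_as_user) != NULL ? 0 : -1;
}
```

The downside, as noted above, is that every generated numeric field and its memory management would change, which is why this needs openapi-generator template work rather than a local patch.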
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
According to the Kubernetes bug fix https://github.com/kubernetes/kubernetes/issues/78308, pods that explicitly specify `runAsUser: <uid>` or `runAsGroup: <gid>` should always start the container with the given user or group on every launch. If `runAsUser` or `runAsGroup` is not set, the container should run with the USER specified when building the image.

With the current SDK implementation, however, `runAsUser: 0` is ignored when the user forces the container to launch as root, overriding the image's default USER. The same goes for `runAsGroup: 0`.

https://github.com/kubernetes-client/c/blob/859fc3f5eecda59f4eea6b07431aa2ebf38db779/kubernetes/model/v1_security_context.c#L134
https://github.com/kubernetes-client/c/blob/859fc3f5eecda59f4eea6b07431aa2ebf38db779/kubernetes/model/v1_security_context.c#L118
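The linked lines follow the usual openapi-generator C serialization pattern, paraphrased below: the field is gated on the plain `int` member's truthiness, so `run_as_user == 0` (root) is indistinguishable from "unset" and is silently dropped from the request body:

```c
// v1_security_context->run_as_user (paraphrased excerpt from
// v1_security_context_convertToJSON(); 0 is treated as "not set")
if (v1_security_context->run_as_user) {
    if (cJSON_AddNumberToObject(item, "runAsUser",
                                v1_security_context->run_as_user) == NULL) {
        goto fail; // Numeric
    }
}
```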
I have a local fix that preserves these semantics by treating -1 as the invalid (unset) user and group:
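The patch itself is not included above; the following is a minimal sketch of the -1-sentinel idea under the stated assumptions. The sentinel macro, struct, and function names are hypothetical stand-ins, not the actual generated identifiers:

```c
#include <cjson/cJSON.h>

/* Hypothetical sentinel: -1 means "field not set", so 0 stays a legal UID/GID. */
#define K8S_UID_UNSET (-1)

/* Minimal stand-in for the generated model struct. */
typedef struct {
    int run_as_user;  /* K8S_UID_UNSET => unset */
    int run_as_group; /* K8S_UID_UNSET => unset */
} security_context_sketch_t;

/* Creation initializes the fields to the sentinel instead of 0. */
static void security_context_sketch_init(security_context_sketch_t *ctx)
{
    ctx->run_as_user  = K8S_UID_UNSET;
    ctx->run_as_group = K8S_UID_UNSET;
}

/* Serialization gates on the sentinel rather than on truthiness,
 * so runAsUser: 0 and runAsGroup: 0 are actually sent to the API server. */
static int security_context_sketch_to_json(cJSON *item,
                                           const security_context_sketch_t *ctx)
{
    if (ctx->run_as_user != K8S_UID_UNSET &&
        cJSON_AddNumberToObject(item, "runAsUser", ctx->run_as_user) == NULL) {
        return -1;
    }
    if (ctx->run_as_group != K8S_UID_UNSET &&
        cJSON_AddNumberToObject(item, "runAsGroup", ctx->run_as_group) == NULL) {
        return -1;
    }
    return 0;
}
```

This keeps the generated struct layout unchanged (no pointers, no extra allocation), at the cost of reserving -1, which is not a valid UID/GID anyway.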