Open rajivml opened 4 years ago
Hello @rajivml!
> I have seen this PR #1016, where this feature is being implemented, and just want to check: once this PR is merged, if I switch to buildah as my container manager, will the issue I am facing here also be resolved?
Yes, that's correct. Using `buildah` as your container manager would avoid having to run the Docker-in-Docker setup described in #1016.
> The issue I am facing in #1016 is that the s2i containers being spawned are repeatedly getting killed on Kubernetes when the nodes are under memory pressure, and I am not able to figure out how to avoid them getting killed.
However, you would still need to investigate the root cause of this issue. Using `buildah` as a container manager would save you resources, but I would not be able to determine whether that's enough to solve the memory pressure issue.
> I am assuming that with this implementation, i.e. with buildah in place, the entire container build will execute within the same pod that triggered the s2i build, so the memory and CPU limits applied to the pod will apply to the s2i build as well, since the build runs inside the pod.
That's true for `buildah` as much as for DinD. The DinD setup you're using has a sidecar running the Docker instance, and Kubernetes sidecars are in the same pod.
Using the `buildah` container manager would avoid having to run a sidecar and makes things fairly simpler.
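For context, a minimal sketch of what such a DinD sidecar setup looks like (the pod name, container names, and images are illustrative assumptions, not taken from this thread): both containers sit in one pod spec, so pod scheduling and the per-container resource limits cover the dockerd sidecar that actually runs the build containers.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s2i-build            # illustrative name
spec:
  containers:
  - name: builder            # runs the s2i CLI, talks to the sidecar's dockerd
    image: example/s2i-builder:latest    # hypothetical image
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375        # sidecar is reachable on localhost within the pod
    resources:
      limits:
        memory: 1Gi
        cpu: "1"
  - name: dind               # Docker-in-Docker sidecar
    image: docker:dind
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""              # disable TLS for this illustration; real setups should keep it on
    securityContext:
      privileged: true       # DinD requires privileged mode, which is itself a security concern
    resources:
      limits:                # limits here bound every container the inner dockerd spawns
        memory: 2Gi
        cpu: "2"
```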
Thanks @otaviof for your immediate reply on this.
> However, you would still need to investigate the root of this issue. Using `buildah` as a container manager would save you resources, but I would not be able to determine if that's enough to solve the memory pressure issue.
Yeah, true. We are investigating the issue but have not been able to pinpoint it. I think that even though we are setting limits on the pods, the s2i container that gets spawned is running in best-effort mode and gets killed as soon as the kernel detects an OOM scenario. We are trying to reduce the number of pods responsible for building docker images so that we throttle the requests, and we will also try setting limits on the pods.
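One detail worth noting (general Kubernetes behavior, not something stated in this thread): a pod is classified as BestEffort only when none of its containers set any CPU or memory requests or limits, and BestEffort pods are the first candidates for eviction under node memory pressure. Setting requests and limits moves the pod into the Burstable or Guaranteed class. A minimal sketch, with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-pod            # illustrative name
spec:
  containers:
  - name: s2i
    image: example/builder:latest    # hypothetical image
    resources:
      requests:              # requests equal to limits for every container
        memory: 512Mi        # puts the pod in the Guaranteed QoS class,
        cpu: 500m            # the last to be evicted under memory pressure
      limits:
        memory: 512Mi
        cpu: 500m
```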
> That's true for `buildah` as much as for DinD. The DinD setup you're using has a sidecar running the Docker instance, and Kubernetes sidecars are in the same pod.
Thanks for letting me know how s2i containers run in DinD mode. I was not aware that the spawned s2i container runs within the same pod that triggered the build; I was under the assumption that the container would be spawned on the same node where the DinD pod is running and that we would have no control over it. If it runs as a sidecar within the same DinD pod, then I think setting limits on the pod should definitely help.
BTW, when are we targeting to merge this PR, any idea? The DinD approach was also raised as a security issue.
Hi @otaviof, just want to check whether buildah support is still on the radar, because I don't see much activity on this PR https://github.com/openshift/source-to-image/pull/1003. I want to check if there are any timelines you are targeting; if this is a long shot, then I will look at alternatives for builds other than DinD.
@rajivml I'm afraid support for `buildah` will take some more thinking and effort before we can introduce it. Sorry, it might take longer than expected.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now please do so with `/close`.
/lifecycle stale
/lifecycle frozen
This is a feature we want for the future of s2i.
Hi @otaviof, I have seen this PR https://github.com/openshift/source-to-image/pull/1003, where this feature is being implemented, and just want to check: once this PR is merged, if I switch to buildah as my container manager, will the issue I am facing here also be resolved? The issue I am facing in #1016 is that the s2i containers being spawned are repeatedly getting killed on Kubernetes when the nodes are under memory pressure, and I am not able to figure out how to avoid them getting killed.
I am assuming that with this implementation, i.e. with buildah in place, the entire container build will execute within the same pod that triggered the s2i build, so the memory and CPU limits applied to the pod will apply to the s2i build as well, since the build runs inside the pod.