Open sgaist opened 1 year ago
@sgaist thanks for thinking about this! I also want to avoid locking into docker or podman specifically and allow for various builders.
The intent to support other builders is reflected in the naming of the daemonset, docker-api,
and in the exposed volumes etc., since either docker or podman can start a daemon that exposes a Docker-compatible API.
To expose a Docker-compatible API via podman, I think only the image and args need to change, nothing else. To minimize complexity, the actual socket file ought to be exposed at a path unrelated to which software provides the Docker-API-compatible socket.
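As a rough sketch of that idea, the daemonset's container spec might differ only in image and args between the two backends. The image names, socket path, and volume names below are illustrative assumptions, not the chart's actual values:

```yaml
# Hypothetical daemonset container snippets -- only image/args differ.
# Docker-backed API socket:
containers:
  - name: docker-api
    image: docker:dind
    args: ["--host=unix:///var/run/builder/api.sock"]
    volumeMounts:
      - name: api-socket
        mountPath: /var/run/builder

# Podman-backed, same socket path so consumers are unaffected:
containers:
  - name: docker-api
    image: quay.io/podman/stable
    args: ["podman", "system", "service", "--time=0",
           "unix:///var/run/builder/api.sock"]
    volumeMounts:
      - name: api-socket
        mountPath: /var/run/builder
```

Because consumers only mount the socket path, swapping the backend would be invisible to them.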
@consideRatio something else about this matter came to mind, but I currently can't find the corresponding ticket.
There was at some point a discussion, if memory serves well with @manics in the loop, about using a TCP connection rather than the unix socket so we could decouple things a bit more from the host. The main drawback I saw in using a service for that is that we would lose the locality of the build, and thus might rebuild the same image several times, since requests could hit different pods depending on the deployment. However, it might be something worth taking into account with the binderhub-service.
The BinderHub Kubernetes builder already bypasses the default build pod scheduling if sticky builds are enabled, and assigns the build node directly: https://github.com/jupyterhub/binderhub/blob/a05b3420be62d6240f010553fb5c70d76a1fba31/binderhub/build.py#L334-L347
Using a TCP connection instead of the host socket would require either listing the available pods and connecting by IP instead of relying on a K8s service, or alternatively using a stateful set for the build pods so their names follow the standard <statefulset>-N
pattern, but the logic should be the same.
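A minimal sketch of that StatefulSet routing idea, assuming build pods run as a StatefulSet reached through a headless service (the names `build-api`, `build-api-svc`, and the function itself are hypothetical, not part of BinderHub): hashing the repository to a pod ordinal keeps builds for the same repo on the same pod, preserving locality without per-pod IP discovery.

```python
import hashlib

def sticky_build_pod(repo: str, statefulset: str, replicas: int,
                     service: str, namespace: str) -> str:
    """Deterministically map a repository to one StatefulSet pod.

    StatefulSet pods have stable names <statefulset>-0 .. <statefulset>-(N-1)
    and stable per-pod DNS via a headless service, so the same repo always
    reaches the same pod and can reuse its local build cache.
    """
    # Stable hash (unlike Python's hash(), which is salted per process).
    digest = hashlib.sha256(repo.encode()).hexdigest()
    ordinal = int(digest, 16) % replicas
    pod = f"{statefulset}-{ordinal}"
    # Per-pod DNS name provided by the headless service:
    return f"{pod}.{service}.{namespace}.svc.cluster.local"

# Same repo -> same pod; different repos spread across the replicas.
addr = sticky_build_pod(
    "https://github.com/org/repo", "build-api", 3, "build-api-svc", "ns"
)
```

This mirrors what sticky builds already do with node assignment, just at the pod-DNS level instead of the scheduler level.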
Context
Since https://github.com/jupyterhub/binderhub/pull/1531, BinderHub supports Podman as a replacement for Docker; that PR also modified the helm chart in such a way that it should be simpler to add other image building systems. Since this project's goal is to provide a standalone deployment of BinderHub, shouldn't it follow the build service part of that deployment more closely? Doing so would avoid a drift in features between the classic BinderHub and the binderhub-service.
Proposal
Just as a quick thought, it might be possible to use container-builder/daemonset.yaml as a reference implementation, or maybe even as-is, though that would require some changes to the currently proposed configuration.
Updates and actions
No response