Currently, as many components of Krill Manager as possible are implemented as Docker containers, so that the only components on the host are host-related or prerequisites. For example, Docker itself cannot run inside a Docker container, and GlusterFS needs awareness of the host network environment and must be set up before Docker attempts to use it.
However, having the krillmanager command (which is basically just one script) inside a Docker container adds command execution delay, which becomes very significant on AWS and Azure (it can be tens of seconds), because:
The Docker container image is far larger than the krillmanager script itself, as it must also contain the Docker and Gluster drivers used by the script.
This large (>500 MiB) Docker image leads to a long, slow download on AWS and Azure (not on DO) and also increases container creation time. As the Docker and Gluster drivers must already exist on the host, installing the krillmanager script on the host instead of inside a Docker image would hugely boost the apparent responsiveness of the krillmanager command and make commands like krillc as responsive as its native Rust-based counterpart.
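As a rough illustration of how the container overhead could be quantified, a small timing helper can compare a containerised invocation with a native one. This is only a sketch; the image name and script path in the usage comment are placeholders, not the real Krill Manager names.

```shell
#!/bin/sh
# Sketch: measure the wall-clock latency of a command in milliseconds.
elapsed_ms() {
  start=$(date +%s%3N)      # GNU date: epoch time with millisecond precision
  "$@" > /dev/null 2>&1
  end=$(date +%s%3N)
  echo $(( end - start ))
}

# Hypothetical comparison (image and install path are placeholders):
#   elapsed_ms docker run --rm example/krillmanager version
#   elapsed_ms /usr/local/bin/krillmanager version
```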
Some things would have to be adapted:
Currently the krillmanager script can safely read and write files in shared storage because it uses a Docker over Gluster mount point. Accessing /data/glusterfs directly on the host isn't safe and should be avoided; instead, a host Gluster mount point would be needed. An additional plus point here is that this would eliminate the need for the krillmanager edit command.
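A host Gluster mount point might look something like the following. This is a sketch only: the volume name "gv0" and the mount path are assumptions, not the actual Krill Manager names.

```shell
# Mount the Gluster volume on the host itself, so a host-installed
# krillmanager script can safely read and write shared storage.
mkdir -p /mnt/glusterfs
mount -t glusterfs localhost:/gv0 /mnt/glusterfs

# Or persistently, via an /etc/fstab entry (_netdev defers mounting
# until the network is up):
# localhost:/gv0  /mnt/glusterfs  glusterfs  defaults,_netdev  0  0
```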
There may be additional tools that need to be installed on the host, e.g. jq.
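A host install could guard against missing dependencies with a small preflight check. The tool list below is an assumption (only jq is mentioned above); the function itself is a generic sketch.

```shell
#!/bin/sh
# Sketch: verify that tools a host-installed krillmanager script would
# rely on are present, reporting all missing tools at once.
require_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" > /dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "missing required tools:$missing" >&2
    return 1
  fi
}

# Hypothetical usage: require_tools jq docker gluster
```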
Currently, in theory, the krillmanager command, being Docker based, can move to another host if the master is down. In practice, however, as we are not using an HA Swarm, if the master is lost so is the Swarm manager, and many krillmanager commands will not work because they depend on local communication with the Swarm master, so this is a non-issue.
The krillmanager_wrapper logic would need to be modified to call krillmanager directly instead of via Docker.
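The adapted wrapper logic could be as simple as the following sketch. The install path is an assumption, and the "before" line is illustrative rather than the wrapper's actual docker invocation.

```shell
#!/bin/sh
# Sketch of adapted krillmanager_wrapper logic: invoke the
# host-installed script directly rather than via Docker.
KRILLMANAGER=${KRILLMANAGER:-/usr/local/bin/krillmanager}  # assumed path

krillmanager_wrapper() {
  # Before (illustrative): docker run --rm ... example/krillmanager "$@"
  # After: call the script on the host directly, passing arguments through.
  "$KRILLMANAGER" "$@"
}
```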
The upgrade process currently checks Docker Hub for new releases of the manager Docker image. If no manager Docker image is used, this will no longer work. Alternatively, a thin placeholder image could be published as a trigger for an upgrade, or could even still act as a container from which the new scripts are installed.
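An upgrade check without a local manager image could instead query the Docker Hub v2 tags endpoint directly. The repository name below is a placeholder, and the sed-based JSON parsing is purely illustrative (a real script would use jq).

```shell
#!/bin/sh
# Sketch: list available tags for an image via the Docker Hub v2 API.
REPO="example/krillmanager"   # placeholder, not the real repository
TAGS_URL="https://registry.hub.docker.com/v2/repositories/${REPO}/tags"

latest_tags() {
  # Extract every "name" field from the tags JSON on stdin,
  # one tag per output line (crude parsing, illustration only).
  sed 's/"name"[[:space:]]*:[[:space:]]*"\([^"]*\)"/\n@\1\n/g' |
    sed -n 's/^@//p'
}

# Usage (network access assumed):
#   curl -fsSL "$TAGS_URL" | latest_tags
```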
Instead of just extracting the current Bash scripts from the Docker image, an alternative solution (and one more aligned with the development of Krill itself) could be to rewrite the manager and wrapper functionality as a Rust application published to crates.io, and check for updates to the published crate.
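For the crates.io route, an update check could use the public crates.io API, which exposes a max_version field in each crate's metadata. The crate name below is hypothetical, and the parsing is again only a sketch (a Rust implementation would deserialize the JSON properly).

```shell
#!/bin/sh
# Sketch: read the latest published version of a crate from crates.io.
CRATE="krill-manager"   # hypothetical crate name

crate_max_version() {
  # Pull the "max_version" field out of crate metadata JSON on stdin.
  sed -n 's/.*"max_version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# Usage (network access assumed):
#   curl -fsSL "https://crates.io/api/v1/crates/${CRATE}" | crate_max_version
```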