Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs.
Currently, ttyd is embedded in the krunner packages (ubuntu, centos, alpine).
Since we already ship a large pre-built Python binary inside the krunner packages (one for each major version of each base Linux distro), even though the size limit for our PyPI uploads was increased to 120 MiB from 60 MiB by request, it is still too tight to include all intrinsic app binaries there, especially big ones like code-server.
Let's split out our intrinsic apps using the recently revamped common plugin interface: add a new "container app provider" plugin category named backendai_container_app_v20.
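A plugin package could then declare its provider under this category via a standard setuptools entrypoint. A minimal sketch, assuming hypothetical package/module/class names:

```ini
# Hypothetical setup.cfg excerpt of a separately distributed app plugin;
# the package, module, and class names are placeholders.
[options.entry_points]
backendai_container_app_v20 =
    ttyd = backend_ai_app_ttyd.plugin:TtydAppPlugin
```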
There should be a new plugin context with a mechanism to auto-deploy and auto-update the "app volumes" from plugin-provided binary archives upon context (= agent) initialization.
When updating, it should not remove an existing volume if any running container is still using it, like the current krunner deployment code does. Since it is currently not easy to inspect the running kernel containers from the plugin's point of view, let's just skip removal of old volumes for now.
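A minimal sketch of this deploy-on-init behavior, assuming a hypothetical plugin context API with stubbed Docker helpers (the actual interface is to be designed):

```python
# All class and method names here are assumptions, not the actual API.
import tarfile


class ContainerAppContext:
    """Manages app volumes for loaded backendai_container_app_v20 plugins."""

    def __init__(self, plugins: dict) -> None:
        self.plugins = plugins  # entrypoint name -> plugin instance

    async def init(self) -> None:
        # Upon agent initialization, deploy each plugin's app volume.
        for name, plugin in self.plugins.items():
            volume = plugin.volume_name()  # the name embeds the compat tags
            if await self.volume_exists(volume):
                continue  # already deployed with the same version tag
            mountpoint = await self.create_volume(volume)
            # Unpack the plugin-provided tar.xz archive into the new volume.
            with tarfile.open(plugin.archive_path(), "r:xz") as tar:
                tar.extractall(mountpoint)
            # Old volumes are intentionally left in place: we cannot easily
            # tell whether a running kernel container still mounts them.

    async def volume_exists(self, name: str) -> bool:
        ...  # stub: query the Docker daemon for the named volume

    async def create_volume(self, name: str) -> str:
        ...  # stub: create the volume and return its host-side mountpoint
```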
We use the same tar.xz archive format as the krunner packages to maximize compression.
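On the plugin side, producing such an archive is straightforward with Python's built-in LZMA support; this is an assumed build step, shown only for illustration:

```python
import tarfile

# The archive name follows the compatibility-tag scheme described below.
with tarfile.open("ttyd.x86_64.ubuntu20.04.v1.tar.xz", "w:xz") as tar:
    tar.add("build/ttyd", arcname="ttyd")
```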
The app volume names must include compatibility tags, including architecture (e.g., x86_64), base-distro (e.g., ubuntu20.04), and per-plugin binary version (e.g., v1).
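A possible naming scheme combining these tags (the exact convention is an assumption to be settled during implementation):

```python
def app_volume_name(app: str, arch: str, distro: str, version: str) -> str:
    # e.g., app_volume_name("ttyd", "x86_64", "ubuntu20.04", "v1")
    #       -> "backendai-app-ttyd.x86_64.ubuntu20.04.v1"
    return f"backendai-app-{app}.{arch}.{distro}.{version}"
```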
The per-plugin volume is mounted into the /opt/backend.ai-apps/{plugin-entrypoint-name}/ directory of all compatible containers (e.g., /opt/backend.ai-apps/ttyd/).
The launch sequences for these apps are currently hard-coded as the intrinsic services. Let's just keep them as-is for now.
The paths in the hard-coded launch sequences must be updated to use the new per-app mount paths (e.g., /opt/kernel/dropbear -> /opt/backend.ai-apps/ssh/dropbear).
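For example, a hard-coded command line would change like this (the surrounding service-definition structure is simplified for illustration):

```python
# Before: the dropbear binary shipped inside the krunner volume.
old_cmd = ["/opt/kernel/dropbear", "-E", "-p", "2200"]
# After: the same binary served from the per-app volume mount.
new_cmd = ["/opt/backend.ai-apps/ssh/dropbear", "-E", "-p", "2200"]
```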
Upon container creation, the mount paths must be replaced with the per-app mount paths retrieved from the new plugin context, together with the relevant executable names under them.
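At kernel-creation time, the agent could then assemble the Docker mount specs from the plugin context like this (the helper name is an assumption; the mount-spec fields follow Docker's API):

```python
def build_app_mounts(ctx: "ContainerAppContext") -> list[dict]:
    # One read-only volume mount per loaded container app plugin,
    # using the ContainerAppContext sketched above.
    return [
        {
            "Type": "volume",
            "Source": plugin.volume_name(),
            "Target": f"/opt/backend.ai-apps/{name}",
            "ReadOnly": True,
        }
        for name, plugin in ctx.plugins.items()
    ]
```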
Since the target apps are "intrinsic", the agent's setup.cfg must have explicit dependency links to the target container app provider plugins distributed on PyPI.
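For example (the plugin package names below are placeholders, not actual PyPI releases):

```ini
# Hypothetical excerpt from the agent's setup.cfg
[options]
install_requires =
    backend.ai-app-ttyd
    backend.ai-app-sshd
    backend.ai-app-vscode
```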