Closed shikanime closed 11 months ago
In the last few days I have completely switched to nixpkgs, as it is a proven distribution agnostic package manager with high isolation and reproducibility principles. That covers most libraries and programs in the Linux ecosystem.
It has a good domain-specific declarative language, superior to Bash, that allows dependencies to be assembled through file system composability and symbolic link manipulation, similar to ASDF, but without the need to install nixpkgs tools in the container itself and with much better reproducibility.
This facilitates the building and maintenance of purpose-designed development containers.
Proof of concept of building a composable container. Source: https://github.com/NixOS/nixpkgs/blob/a8506b65b03ab7b8331855849b6ac509cbe2cb0c/pkgs/build-support/docker/examples.nix#L44 Image: https://hub.docker.com/r/nixpkgs/nginx
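For reference, the published proof-of-concept image can be tried directly. It was produced by dockerTools from the linked Nix expression, with no Dockerfile involved; the host port below is an arbitrary choice for this sketch:

```shell
# Pull the dockerTools-built image from the linked Docker Hub repo
# and run it, mapping container port 80 to host port 8080.
docker pull nixpkgs/nginx
docker run --rm -p 8080:80 nixpkgs/nginx
```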
A NixOS-based container would be awesome, given how hard it is to install Nix into Docker :)
From what I've tried, NixOS itself is not really container friendly, as it relies heavily on systemd, which is not available in Docker without privileged mode. I mean it works, and I used it for a few days before dropping it because it was way too suboptimal.
The NixOS principle clearly works: I've been working with it for 2 years, and Mitchell Hashimoto of HashiCorp reports the same experience. Deployed on Hyper-V (Windows), UTM (macOS) and KVM (Linux), each of these platforms works surprisingly well. Since then, I only work with VS Code remotely, even remotely from my own computer, using my slower Mac to connect to my PC from a cooler room or the garden, because it's so hot these days (っ °Д °;)っ
But the tricky part is that it doesn't follow the filesystem hierarchy standard (FHS), which causes a lot of problems for software built with hard-coded dependencies like lib64/ld-linux-x86-64.so.2.
Maybe the real solution would be to use Bazel as a build system: it is the only viable option for horizontal composition I have found that is neither Nix, with its constraints, nor the complicated vertical composition of Bake or Makefile, which were my first attempts until they became too complex to handle.
Here is a minimal example of a minimally modified container that is maximally Nixified (flakes, home-manager, minimal apt): https://github.com/ComposableFi/composable/blob/dz/byog-container/Dockerfile. It works in a remote Codespace and in a local dev container in VS Code, and works with the docker-in-docker feature derivation. I tested that at least the Rust VS Code extensions work out of the box, along with all the Nix tooling.
Since there are numerous potential ecosystems to tap into, we've worked on something known as "Dev Container Features", which are intended to allow you to mix together multiple scripts and devcontainer.json snippets. More information will be in this month's VS Code release notes, but we cover them in the open spec: https://containers.dev/implementors/features
For example, this will add docker-in-docker to the debian image:
{
"image": "debian",
"features": {
"ghcr.io/devcontainers/features/docker-in-docker:1": {}
}
}
Features can have configurable options and you can create and publish your own features using this template: https://github.com/devcontainers/feature-template
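As a hedged sketch of what a Feature's install script typically looks like (the option name and the helper function are my illustrations, not taken from any published Feature), the core pattern is an idempotent check-then-install run as root by the dev container CLI:

```shell
#!/bin/sh
# Minimal Feature install.sh sketch. The dev container tooling runs this as
# root and exposes the Feature's options as uppercase environment variables.
set -e

# Hypothetical option declared in this Feature's devcontainer-feature.json.
VERSION="${VERSION:-latest}"

# Idempotent helper: install apt package $2 only if command $1 is missing.
ensure_cmd() {
    command -v "$1" >/dev/null 2>&1 && return 0
    apt-get update && apt-get install -y --no-install-recommends "$2"
}

# Example invocation (commented out here since it would touch the network):
# ensure_cmd curl curl
```

A published Feature is essentially this script plus a devcontainer-feature.json describing its id, version, and options.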
FWIW - here is a feature I put together that installs Nix and deals with a few different configuration challenges; it illustrates one deployed via this template: https://github.com/Chuxel/feature-library/tree/main/src/nix
Using it is the same as above:
{
"image": "debian",
"features": {
"ghcr.io/chuxel/feature-library/nix:1": {
"packages": "python3"
}
}
}
The contents of the script library folder are being migrated to https://github.com/devcontainers/features with this in mind.
@Chuxel thanks for the nice example of a nix script.
So with nix, it can kind of be inverted:
we are going to use dockerTools to build images, so no Dockerfiles anymore.
extensions?
nix can do it too - that's kind of what nix is. So we can mix and match different containers depending on shells, home-managers, etc.
also we want flake.lock for company-wide sharing.
actually, we are going to remove as many features as possible via nix. I think features are excessive when you are on nix.
also we will have all containers prebaked, so there is really no need for a special job to rebake them. Their bake is gated by nixops and a fully nixified GitHub Action, with one dummy gate job before side effects happen.
so with nix we can switch the company to new stuff in several hours. Mixing features is not manageable.
also nix is lazy; we want to produce nix from the repo, with tooling built as part of the repo, so you kind of always have the latest tooling.
so nix is strictly superior to what can be achieved with json/features, etc.
also features are insecure and unsafe. Some features seem to change /tmp ACLs/permissions and make nix fail later, so I do not trust features.
on top of the known stuff that people market, nix also does monorepos and code ownership/gating much better. So going full nix is best, without diverging into features.
@dzmitry-lahoda Yeah, unfortunately, I don't think we can assume that all users will be willing to use Nix and only Nix. As you rightly pointed out, there are varying levels of comfort with any given packaging ecosystem depending on the team or company. Enabling Nix becomes an important consideration, however.
That said, there is little to prevent you from using Nix as you describe already if you've opted to embrace that ecosystem - which is great! Features are really to allow people without those skills the ability to assemble things together - whether using their own features or community ones.
You may also be interested in https://github.com/devcontainers/spec/issues/18 which would allow dev container metadata to be added to a general image rather than requiring a devcontainer.json file. The CLI used to drive all of this is also OSS at https://github.com/devcontainers/cli with the idea that additional integrations can be proposed and added by the community over time.
To your point, pre-baking images is important, so there's a GitHub Action that uses the CLI to build and you can do it from the CLI itself. This can also be used during CI. https://github.com/devcontainers/ci
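For local prebuilding outside the Action, the same flow can be sketched with the CLI (the image name is a placeholder; flags as I understand them from the devcontainers/cli README):

```shell
# Install the dev container CLI and prebuild the image defined by
# .devcontainer/devcontainer.json in the current workspace folder.
npm install -g @devcontainers/cli
devcontainer build --workspace-folder . --image-name ghcr.io/acme/devcontainer:latest

# Push so teammates and CI pull the prebaked image instead of rebuilding it.
docker push ghcr.io/acme/devcontainer:latest
```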
@dzmitry-lahoda FWIW - You will experience this if you are on Linux and your local user's UID does not match a non-root user in the container, regardless of how Nix is installed. If you run all your containers as root, there's no problem. However, any bind-mounted source code on Linux will potentially be read and written as root (depending on your Docker daemon setup). Dev containers will automatically update the UID of any user you reference to avoid this problem. However, since Nix force-installs itself under /nix, the UID update will result in the /nix folder being owned by root (since the files' UIDs won't change). That's fine if you never want to install anything via Nix once you're in the container itself - otherwise it's a problem. The classic resolution here is a group, but Nix doesn't handle that well.
So, any code you see in my script is purely about working around that reality. It's not strictly required - and I'd likely make it an option... I really used it as an example of a personal Feature, since this one isn't official. Anyway, there's more discussion of this in https://github.com/devcontainers/spec/issues/25 if you encounter it in your own explorations.
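To make the workaround concrete, here is a hedged sketch (the function name and the "vscode" user are my assumptions, not the actual feature code) of re-owning the store after a UID remap:

```shell
# After dev container tooling remaps a user's UID, files under /nix keep the
# old numeric owner, so single-user Nix can no longer write to its store.
# Re-own the store for the remapped user; must run as root (e.g. in a
# post-create hook).
fix_store_owner() {
    store_dir="$1"
    user="$2"
    # Skip the (slow) recursive chown when ownership is already correct.
    if [ "$(stat -c %U "$store_dir")" != "$user" ]; then
        chown -R "$user" "$store_dir"
    fi
}

# Typical invocation in a container whose remapped user is "vscode":
# fix_store_owner /nix vscode
```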
EDIT: Updated to just use the nix-daemon in this scenario - added comments on why certain things are done in the script as well.
@Chuxel Dev Container Features (still reading) are kind of how I used a Makefile (which could be reproduced using Bake) to define container dependencies, with Dockerfiles as a scripting platform, but I ran into a pretty nasty problem when I was building my fourth development environment after a certain amount of time (thinking about how to update everything without breaking everything...).
FROM base
RUN apt-get install ...
Here's the problem: I started building several images assuming a Debian base of a certain version, then created another one that required a different Debian version because of some APT dependency and CUDA dependencies from another source list.
The dependencies are decoupled, but the build context still depends on what exists in the base, like the system package manager (e.g. RPM) or some runtime (e.g. Bash, not present in Alpine by default). The scripts are no longer isolated, which becomes really hard to manage, test and keep working in the long run.
I see no way around either having build metadata at the application level in Dev Container Features (Docker already manages the CPU architecture), such as OS version, system constraints, and shared libraries (e.g. libstdc++ for Python, libcudart for TensorFlow/PyTorch), or having an imperative shell-based library of guards and assertions, which brings a lot of complexity into the scripts.
This is the point of having a strong build system, which should have encountered such issues many times, in my opinion. I use Nix as my daily driver for exactly this reason, given my position as an extreme polyglot engineer (one day I may work in Go, another day in OCaml, with side projects in Erlang/Elixir, then Python for most of the data engineering tasks), but I still don't recommend putting it in front of any user who just wants a simple and easy workflow.
This is still an open question and I'm still trying to solve it, experimenting with Ansible, Nix, DNF, APT+CHROOT, Bazel, Bake, Makefile... for building containers.
One of my last attempts was to use Direnv and Nix Flakes (not strictly necessary), maybe this could be an inspiring way to go?
Yeah, the idea behind a Feature is to centralize the complexity into something more atomic than a single package. It does not eliminate it, for sure. e.g., if you need curl, a Feature should check for it and install it if missing. For architecture / distro, I also strongly want to get https://github.com/devcontainers/spec/issues/58 in - which automates uname and /etc/os-release checks. Furthermore, a Feature can encapsulate installers for multiple packaging systems depending on the scenario - again centralizing the complexity to ecosystem experts so that individual developers do not have to mess with it. There are also tools that you're better off just curl'ing/git-pulling to install (like the pack CLI, kind, nvm, Oh My Zsh!) - so these wouldn't involve any package manager at all. So the idea is a roll-up that doesn't pick a packaging system and enables easy-to-specify options. Enabling Nix flakes is then an enhancement to the nix Feature above rather than a fundamental change. The nix Feature above allows you to specify a list of packages to install in addition to laying down Nix itself. Another option could be to point to a flake - so the model can flex as Linux ecosystems do the same. (FWIW - the nix Feature above should also work on systems that use the yum or dnf package managers.)
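The uname and /etc/os-release checks mentioned above can be sketched as small helpers (the function names are illustrative, not from the spec proposal):

```shell
# Report the CPU architecture, e.g. "x86_64" or "aarch64".
detect_arch() {
    uname -m
}

# Report the distro ID ("debian", "ubuntu", "fedora", ...) by sourcing an
# os-release style file, defaulting to /etc/os-release.
detect_distro() {
    # shellcheck disable=SC1090
    . "${1:-/etc/os-release}"
    echo "${ID:-unknown}"
}
```

A Feature could then branch to apt, dnf, or apk based on these results instead of each script re-implementing the detection.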
Put another way, these are more container specific "installers" than packages really.
But yeah, as you mentioned, at the moment, the alternative seems to be to go down the full Nix path soup-to-nuts and have it generate the image, which is very powerful, but not approachable for novices - and then you still need to deal with container settings and other dev container metadata.
Given that the packaging and image generation world is very fickle, and there's a new thing seemingly every week, the goal is also to be able to integrate - you shouldn't have to use features if you don't want to. The goal for the spec is that label support can help building into other build systems as they become available - I'm increasingly of the belief that there's not going to be a one-size-fits-all answer that can nail what novices, experts, and everyone in the middle needs or wants. On the novice end, Buildpacks, Nixpacks, etc are also all additional abstractions with a more automated spin - so the label support will be a way to layer in metadata into those systems.
Not sure I have a perfect answer either - so being able to move with communities while still having a model that can integrate with them in a more simple form for novices seems like the best compromise so far.
Nix and flakes are quite cool; if there is a pre-baked dev image, it is much easier to generate a new project-specific config or even a new container. Cannot wait for it!
So I ended up with home-manager based on devenv: https://discourse.nixos.org/t/github-codespace-support/27152/2. It is as close as I got to operating in pure Nix for language support while still having the docker-in-docker feature.
@meicale @dzmitry-lahoda Devcontainer features aim to remove the necessity for a specific pre-configured container image. You can achieve this by using a JSON configuration that includes an image, features, an update command, and a mount.
The JSON configuration is as follows:
{
"image": "mcr.microsoft.com/vscode/devcontainers/base:jammy",
"features": {
"ghcr.io/devcontainers/features/nix:1": {
"extraNixConfig": "experimental-features = nix-command flakes"
},
"ghcr.io/devcontainers/features/common-utils:2": {
"configureZshAsDefaultShell": true
},
"ghcr.io/christophermacgown/devcontainer-features/direnv:1": {}
},
"updateContentCommand": "nix develop --build --accept-flake-config --impure",
"mounts": ["source=nix,target=/nix,type=volume"]
}
It is recommended to create a distinct mount dedicated to the nix-store shared between containers, as its size can increase depending on the amount of software being utilized. However, I have occasionally encountered issues with the daemon being in an error state when using multiple containers simultaneously. The common-utils and Direnv features are primarily for convenience, particularly when using devenv, unless there is a specific reason to use them.
The use of updateContentCommand, as per the specification, is primarily for pre-caching, e.g. with GitHub's prebuilds feature.
It should be noted that I am also using home-manager via the VS Code dotfiles and Codespaces dotfiles support. There is a caveat, however: container behavior may differ from that of a full-fledged user on a virtual machine. Nevertheless, a shell script like install.sh can be used in a similar way.
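As a hedged sketch of such a dotfiles install.sh (the flake URI and the "devcontainer" configuration name are assumptions about my setup, not a canonical recipe):

```shell
#!/bin/sh
# Dotfiles install.sh: apply a home-manager configuration inside the
# container, but only when Nix is actually available there.
set -e
if command -v nix >/dev/null 2>&1; then
    # "dotfiles#devcontainer" is a hypothetical flake output name.
    nix run home-manager/master -- switch \
        --flake "$HOME/dotfiles#devcontainer" -b backup
fi
```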
My current suggestion is to use devcontainer container images, devcontainer features, and nix flakes as an abstraction layer for your development environment. For example, I set up "system" wide software with devcontainer features such as Nix, ZSH, and NodeJS (some extensions require it, such as Sonarlint) and then use Nix Flake as the "project" scope dependency management system, such as NPM.
Here's a typical Direnv that I use:
# If nix is available, use it to set up the build environment with exactly what we need.
if has nix; then
use flake --accept-flake-config --impure
fi
# Load dotenv file
dotenv_if_exists
# Set up Python virtualenv
layout python
I hope this information helps you. I migrated my entire development environment to devcontainer using this stack and have created a standard for projects I'm involved with at my company, which has made things more convenient.
I'm not a good writer but I also share some of my vision in this article.
@shikanime first of all, thank you for the .devcontainer.json example. It is something that works and is available right now.
However I would like to chime in some concerns:
I'm personally experimenting with building a devcontainer using nix tooling, to address the reproducibility thing, but given that devcontainers itself relies heavily on docker/podman I would say that adding "native" support for at least one deterministic OCI builder (could be nix or something else) is something that would pay off in the long run. For instance, container features could simply declare their dependencies instead of imperatively checking and installing them.
I believe that devcontainer features do resolve this issue discussion, thanks for the debate 🤗
Since my previous contributions, I've been thinking a lot about how to compose images based on Dockerfiles for projects that use multiple languages or tools, like a monorepo.
In the spirit of dogfooding, I tried to find each edge case using my own development container environment, built from a mix of Makefile and Dockerfile, and the recently introduced library-scripts took the same approach as my experiment; maybe Dockerfile is good for creating a base system using inheritance. In my experiment, I came across ASDF for installing tools in userspace, which is based on the same principle as library-scripts but does not expose the implementation to the end user.
In this regard, I think of Fedora Silverblue as an interesting source of inspiration, where the base system packages are kept immutable and append-only, like a Dockerfile FROM, but the dnf package manager is preferable for userspace application installation since it doesn't require this heavy but safe process.
What do you think? Perhaps ASDF is not suitable in this context, but as I see it, a similar tool is essential.