nix-community / nixos-generators

Collection of image builders [maintainer=@Lassulus]

Add docker/podman example (and probably examples for other targets) #176

Open · euonymos opened this issue 2 years ago

euonymos commented 2 years ago

Hi! I am trying to package a bare NixOS system as a Docker container with the following flake, but I am getting an error. Are there examples I can start from? Or is there perhaps a blunder in my flake, or something else?

{
  description = "bare NixOs";
  inputs = {
    nixpkgs.url = "nixpkgs/nixos-unstable";
    nixos-generators = {
      url = "github:nix-community/nixos-generators";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };
  outputs = {self, nixpkgs, nixos-generators, ...}: {
    packages.x86_64-linux = {
      container = nixos-generators.nixosGenerate {
        pkgs = nixpkgs.legacyPackages.x86_64-linux;
        modules = [
          ({ pkgs, ... }: {
            services.getty.autologinUser = "root";
          })
        ];
        format = "docker";
      };
    };
  };
}
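
For completeness, I build it with the standard flake invocation (assuming the flake is in the current directory):

nix build .#container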

This builds well, but the result seems to be broken:

docker load -i ./nixos-system-x86_64-linux.tar.xz                                                                                                          
open /var/lib/docker/tmp/docker-import-430931981/bin/json: no such file or directory
Lassulus commented 2 years ago

I only managed to get the docker format running as a privileged podman container. Maybe this can also be done with privileged Docker containers, but I never tried.

I'm unsure what the error is here; nix-locate also can't find any package that includes bin/json.
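
Roughly, the invocation that worked for me was along these lines (the image name and tag here are just an example):

podman run --privileged localhost/nixos:test /init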

Mic92 commented 2 years ago

podman has special systemd support as well. There is a --systemd=true flag.

euonymos commented 2 years ago

It turned out this was just a misuse of podman load; I should have used podman import instead. But I am still unable to run the container.
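
For reference, the import step was along these lines (tag chosen to match the run command below):

podman import ./nixos-system-x86_64-linux.tar.xz localhost/nixos:bare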

podman run --runtime runc --systemd=true --log-level=debug localhost/nixos:bare /init

INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --runtime runc --systemd=true --log-level=debug localhost/nixos:bare /init) 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/etc/containers/containers.conf" 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /run/containers/storage       
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/libpod                    
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is being used 
DEBU[0000] Cached value indicated that native-diff is not being used 
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Initializing event backend journald          
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/runc"            
INFO[0000] Setting parallel job count to 25             
DEBU[0000] Pulling image localhost/nixos:bare (policy: missing) 
DEBU[0000] Looking up image "localhost/nixos:bare" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "localhost/nixos:bare" ...            
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] Found image "localhost/nixos:bare" as "localhost/nixos:bare" in local containers storage 
DEBU[0000] Found image "localhost/nixos:bare" as "localhost/nixos:bare" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61) 
DEBU[0000] Looking up image "localhost/nixos:bare" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "localhost/nixos:bare" ...            
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] Found image "localhost/nixos:bare" as "localhost/nixos:bare" in local containers storage 
DEBU[0000] Found image "localhost/nixos:bare" as "localhost/nixos:bare" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61) 
DEBU[0000] Looking up image "localhost/nixos:bare" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "localhost/nixos:bare" ...            
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] Found image "localhost/nixos:bare" as "localhost/nixos:bare" in local containers storage 
DEBU[0000] Found image "localhost/nixos:bare" as "localhost/nixos:bare" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61) 
DEBU[0000] Inspecting image 62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61 
DEBU[0000] exporting opaque data as blob "sha256:62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] exporting opaque data as blob "sha256:62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] Inspecting image 62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61 
DEBU[0000] Inspecting image 62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61 
DEBU[0000] Inspecting image 62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61 
DEBU[0000] using systemd mode: false                    
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Found apparmor_parser binary in /sbin/apparmor_parser 
DEBU[0000] Loading seccomp profile from "/etc/containers/seccomp.json" 
DEBU[0000] Successfully loaded 1 networks               
DEBU[0000] Allocated lock 5 for container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev]@62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] exporting opaque data as blob "sha256:62832ec56bb48db6d808ee316ae3e875fea17d32fe3c25d5ce3036a2694eea61" 
DEBU[0000] Cached value indicated that overlay is not supported 
DEBU[0000] Check for idmapped mounts support            
DEBU[0000] Created container "d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218" 
DEBU[0000] Container "d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218" has work directory "/var/lib/containers/storage/overlay-containers/d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218/userdata" 
DEBU[0000] Container "d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218" has run directory "/run/containers/storage/overlay-containers/d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218/userdata" 
DEBU[0000] Not attaching to stdin                       
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is being used 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Made network namespace at /run/netns/netns-b27c3451-9a77-e1bc-492f-5e96a9e49c85 for container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/RGGRF6EBK5N23RFL3RHONBVBAN,upperdir=/var/lib/containers/storage/overlay/54f9419f1000f9bb6ec78f7b5756ca596ca7e2028f5a35a549a86b059b8d27ad/diff,workdir=/var/lib/containers/storage/overlay/54f9419f1000f9bb6ec78f7b5756ca596ca7e2028f5a35a549a86b059b8d27ad/work,nodev 
DEBU[0000] Mounted container "d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218" at "/var/lib/containers/storage/overlay/54f9419f1000f9bb6ec78f7b5756ca596ca7e2028f5a35a549a86b059b8d27ad/merged" 
DEBU[0000] Created root filesystem for container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 at /var/lib/containers/storage/overlay/54f9419f1000f9bb6ec78f7b5756ca596ca7e2028f5a35a549a86b059b8d27ad/merged 
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::commands::setup] Setting up network podman with driver bridge
[DEBUG netavark::network::core] Container veth name: "eth0"
[DEBUG netavark::network::core] Brige name: "podman0"
[DEBUG netavark::network::core] IP address for veth vector: [10.88.0.7/16]
[DEBUG netavark::network::core] Gateway ip address vector: [10.88.0.1/16]
[DEBUG netavark::network::core] Configured static up address for eth0
[DEBUG netavark::network::core] Container veth mac: "9a:cb:ad:b2:9f:66"
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-1D8721804F16F created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD exists on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT exists on table nat and chain NETAVARK-1D8721804F16F
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -j ACCEPT created on table nat and chain NETAVARK-1D8721804F16F
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE exists on table nat and chain NETAVARK-1D8721804F16F
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-1D8721804F16F
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F exists on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j NETAVARK-1D8721804F16F created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.88.0.0/16 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT exists on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.88.0.0/16 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT exists on table nat
[INFO  netavark::commands::setup] dns disabled because aardvark-dns path does not exists
[DEBUG netavark::commands::setup] {
        "podman": StatusBlock {
            dns_search_domains: Some(
                [],
            ),
            dns_server_ips: Some(
                [],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "9a:cb:ad:b2:9f:66",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.88.0.1,
                                    ),
                                    ipnet: 10.88.0.7/16,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
INFO[0000] AppAmor profile "containers-default-0.48.0" is already loaded 
DEBU[0000] Adding nameserver(s) from network status of '[]' 
DEBU[0000] Adding search domain(s) from network status of '[]' 
DEBU[0000] Skipping unrecognized mount in /etc/containers/mounts.conf: "# Configuration file for default mounts in containers (see man 5" 
DEBU[0000] Skipping unrecognized mount in /etc/containers/mounts.conf: "# containers-mounts.conf for further information)" 
DEBU[0000] Skipping unrecognized mount in /etc/containers/mounts.conf: "" 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting Cgroups for container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 to machine.slice:libpod:d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/var/lib/containers/storage/overlay/54f9419f1000f9bb6ec78f7b5756ca596ca7e2028f5a35a549a86b059b8d27ad/merged" 
DEBU[0000] Created OCI spec for container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 at /var/lib/containers/storage/overlay-containers/d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 -u d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218/userdata -p /run/containers/storage/overlay-containers/d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218/userdata/pidfile -n charming_liskov --exit-dir /run/libpod/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/containers/storage/overlay-containers/d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218.scope 
DEBU[0000] Received: 2254839                            
INFO[0000] Got Conmon PID as 2254814                    
DEBU[0000] Created container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 in OCI runtime 
DEBU[0000] Attaching to container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 
DEBU[0000] Starting container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 with command [/init] 
DEBU[0000] Started container d88823a14949684262f76f58161e1ca652a2ca75965951d9bb8fc3688305f218 

<<< NixOS Stage 2 >>>

INFO[0000] Received shutdown.Stop(), terminating!        PID=2254663
DEBU[0000] Enabling signal proxying                     
mount: /nix/store: bind /nix/store failed.
       dmesg(1) may have more information after failed mount system call.
mount: /nix/store: cannot mount (null) read-only.
       dmesg(1) may have more information after failed mount system call.
running activation script...
mount: /dev: cannot remount devtmpfs read-write, is write-protected.
       dmesg(1) may have more information after failed mount system call.
mount: /dev/pts: cannot remount devpts read-write, is write-protected.
       dmesg(1) may have more information after failed mount system call.
mount: /dev/shm: cannot remount tmpfs read-write, is write-protected.
       dmesg(1) may have more information after failed mount system call.
mount: /proc: cannot remount proc read-write, is write-protected.
       dmesg(1) may have more information after failed mount system call.
mount: /run: cannot mount tmpfs read-only.
       dmesg(1) may have more information after failed mount system call.
mount: /run/keys: cannot mount ramfs read-only.
       dmesg(1) may have more information after failed mount system call.
mount: /run/wrappers: cannot mount tmpfs read-only.
       dmesg(1) may have more information after failed mount system call.
Activation script snippet 'specialfs' failed (32)
setting up /etc...
Warning: something's wrong at /nix/store/cz6na7w751iv7z78fb9ms8hhvnsd0l8z-setup-etc.pl line 120.
Warning: something's wrong at /nix/store/cz6na7w751iv7z78fb9ms8hhvnsd0l8z-setup-etc.pl line 120.
hostname: you don't have permission to set the host name
Activation script snippet 'hostname' failed (1)
/nix/store/bc3932r5i40jlphnh08j1sgdpvxv8nkn-local-cmds: line 23: /run/systemd/container: No such file or directory
unpacking the NixOS/Nixpkgs sources...
error: cannot set host name: Operation not permitted
(use '--show-trace' to show detailed location information)
ln: failed to create symbolic link '/root/.nix-defexpr/channels': File exists
starting systemd...
Failed to set RLIMIT_CORE: Operation not permitted
DEBU[0000] Called run.PersistentPostRunE(podman run --runtime runc --systemd=true --log-level=debug localhost/nixos:bare /init) 

podman version

Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.19
Git Commit:   f73d8f8875c2be7cd2049094c29aff90b1150241-dirty
Built:        Wed Aug  3 15:52:48 2022
OS/Arch:      linux/amd64
tv42 commented 1 year ago

Using podman run --privileged helped. I don't like that, though... I wish there were a way to build an image with NixOS and run it much like "normal" podman/docker containers -- just the one process I want, no systemd.

liamdiprose commented 1 year ago

@tv42 nixos-generators is for building entire NixOS operating systems, which are built around systemd.

You might want to check out https://nix.dev/tutorials/building-and-running-docker-images.
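
That tutorial builds a single-process image with pkgs.dockerTools instead of a full NixOS system. A minimal sketch (the package, image name, and tag are illustrative):

# default.nix -- single-process image, no systemd
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.buildImage {
  name = "hello-docker";                  # illustrative image name
  tag = "latest";
  config = {
    # exactly one process, no init system
    Cmd = [ "${pkgs.hello}/bin/hello" ];
  };
}

The result of nix-build is a tarball that goes through docker load (not import), e.g. docker load < result.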

usmcamp0811 commented 12 months ago

@liamdiprose if it builds an entire OS, what's the use case for the containers built with nixos-generators? Is it meant to build a base image?

liamdiprose commented 12 months ago

I think so. From the nixos-generators README:

The nixos-generators project allows to take the same NixOS configuration, and generate outputs for different target formats. Just put your stuff into the configuration.nix and then call one of the image builders.

The last time I put NixOS inside Docker was to test and gradually replace the preconfigured image the cloud provider supplied, before going through the devops dance they require to create an image and an instance.

drupol commented 3 months ago

I'm looking for a way to run a privileged podman container.

Running podman run --privileged <name> /init seems to work, but I wish there were a better way to run these containers.

Also, to enter the container I run podman exec -it <name> /bin/sh, and that works, but I can't run anything in there: not a single binary is available to me.
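
I suspect the binaries live under /run/current-system/sw/bin rather than /bin, since that is where NixOS links the system packages, so something like podman exec -it <name> /run/current-system/sw/bin/bash might work, but I haven't verified that.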

Do you have some recommendations?