nixos-shell starts a virtual machine based on a vm.nix NixOS module in the current working directory. It also mounts $HOME and the user's nix profile into the virtual machine.

Example vm.nix:

{ pkgs, ... }: {
  boot.kernelPackages = pkgs.linuxPackages_latest;
}
nixos-shell is available in nixpkgs.

To start a vm, use:

$ nixos-shell

In this case nixos-shell will read vm.nix in the current directory.

Instead of vm.nix, nixos-shell also accepts other modules on the command line:

$ nixos-shell some-nix-module.nix
You can also start a vm from a flake's nixosConfigurations or nixosModules output using the --flake flag:

$ nixos-shell --flake github:Mic92/nixos-shell#vm-forward

This will run the vm-forward example.
Note: nixos-shell must be able to extend the specified system configuration with certain modules.

If your version of nixpkgs provides the extendModules function on system configurations, nixos-shell will use it to inject the required modules; no additional work on your part is needed.

If your version of nixpkgs does not provide extendModules, you must make your system configurations overridable with lib.makeOverridable to use them with nixos-shell:

{
  nixosConfigurations = let
    lib = nixpkgs.lib;
  in {
    vm = lib.makeOverridable lib.nixosSystem {
      # ...
    };
  };
}

Specifying a non-overridable system configuration will cause nixos-shell to abort with a non-zero exit status.
When using the --flake flag, if no attribute is given, nixos-shell tries the following flake output attributes:
packages.<system>.nixosConfigurations.<vm>
nixosConfigurations.<vm>
nixosModules.<vm>
If an attribute name is given, nixos-shell tries the following flake output attributes:
packages.<system>.nixosConfigurations.<name>
nixosConfigurations.<name>
nixosModules.<name>
Type Ctrl-a x to exit the virtual machine.

You can also run the poweroff command in the virtual machine console:

$vm> poweroff

Or switch to the qemu console with Ctrl-a c and type:

(qemu) quit
To forward ports from the virtual machine to the host, use the virtualisation.forwardPorts NixOS option. See examples/vm-forward.nix, where the ssh server running on port 22 in the virtual machine is made accessible through port 2222 on the host.
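A minimal forwardPorts configuration for this ssh case might look like the following sketch (the port numbers mirror the example above; adjust them to your setup):

```nix
{
  # Forward host port 2222 to guest port 22 (ssh).
  virtualisation.forwardPorts = [
    { from = "host"; host.port = 2222; guest.port = 22; }
  ];
}
```

With sshd enabled in the guest, you can then connect from the host with, e.g., ssh -p 2222 root@localhost.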
The same can also be achieved by using the QEMU_NET_OPTS environment variable:

$ QEMU_NET_OPTS="hostfwd=tcp::2222-:22" nixos-shell
Your ssh keys are used to enable passwordless login for the root user. At the moment only ~/.ssh/id_rsa.pub, ~/.ssh/id_ecdsa.pub and ~/.ssh/id_ed25519.pub are added automatically. Use users.users.root.openssh.authorizedKeys.keyFiles to add more.

Note: sshd is not started by default. It can be enabled by setting services.openssh.enable = true.
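Putting both together, a guest configuration that starts sshd and authorizes an extra key might look like this sketch (./keys/deploy.pub is a hypothetical path, not something nixos-shell provides):

```nix
{
  # Start sshd inside the VM (off by default).
  services.openssh.enable = true;

  # Hypothetical extra public key file to authorize for root.
  users.users.root.openssh.authorizedKeys.keyFiles = [ ./keys/deploy.pub ];
}
```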
QEMU is started with user-mode networking by default. To use bridge networking instead, set virtualisation.qemu.networkingOptions to something like:

[ "-nic bridge,br=br0,model=virtio-net-pci,mac=11:11:11:11:11:11,helper=/run/wrappers/bin/qemu-bridge-helper" ]

/run/wrappers/bin/qemu-bridge-helper is a NixOS-specific path for qemu-bridge-helper; on other Linux distributions it will be different. QEMU needs to be installed on the host to get qemu-bridge-helper with the setuid bit set, otherwise you will need to start the VM as root. On NixOS this can be achieved using:

virtualisation.libvirtd.enable = true;
By default qemu will allow at most 500MB of RAM; this can be increased using virtualisation.memorySize (size in megabytes):
{ virtualisation.memorySize = 1024; }
To increase the CPU count use virtualisation.cores
(defaults to 1):
{ virtualisation.cores = 2; }
To increase the size of the virtual hard drive, e.g. to 20 GB (defaults to 512M; see the virtualisation options at the bottom):

{ virtualisation.diskSize = 20 * 1024; }

Notice that for this option to take effect you may also need to delete the block device file previously created by qemu (nixos.qcow2).
Notice that changes in the nix store are written to an overlayfs backed by tmpfs rather than the block device configured by virtualisation.diskSize. This tmpfs can be disabled by using:

{ virtualisation.writableStoreUseTmpfs = false; }

This option is recommended if you plan to use nixos-shell as a remote builder.
To use graphical applications, set the virtualisation.graphics NixOS option (see examples/vm-graphics.nix).
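Assuming your nixpkgs version exposes virtualisation.graphics as a boolean (as current qemu-vm modules do), enabling it is a one-liner:

```nix
{
  # Show a graphical QEMU window instead of running headless.
  virtualisation.graphics = true;
}
```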
By default, for the user's convenience, nixos-shell does not enable a firewall. This can be overridden by:
{ networking.firewall.enable = true; }
There are no explicit options for this right now, but you can either use the $QEMU_OPTS environment variable or set virtualisation.qemu.options to pass the right qemu command-line flags:
{
  # /dev/sdc also needs to be read-writable by the user executing nixos-shell
  virtualisation.qemu.options = [ "-hdc" "/dev/sdc" ];
}
{ virtualisation.qemu.options = [ "-bios" "${pkgs.OVMF.fd}/FV/OVMF.fd" ]; }
To mount host directories anywhere inside the virtual machine, use the nixos-shell.mounts.extraMounts option:
{
  nixos-shell.mounts.extraMounts = {
    # simple USB stick sharing
    "/media" = /media;

    # override options for each mount
    "/var/www" = {
      target = ./src;
      cache = "none";
    };
  };
}
You can further configure the default mount settings:
{
  nixos-shell.mounts = {
    mountHome = false;
    mountNixProfile = false;
    cache = "none"; # default is "loose"
  };
}
Available cache modes are documented in the 9p kernel module.
In many cloud environments KVM is not available, and nixos-shell will fail with: CPU model 'host' requires KVM.

In newer versions of nixpkgs this has been fixed by falling back to emulation. In older versions you can set virtualisation.qemu.options or the QEMU_OPTS environment variable:
export QEMU_OPTS="-cpu max"
nixos-shell
A full list of supported qemu CPUs can be obtained by running qemu-kvm -cpu help.
By default VMs will have a NIX_PATH configured for nix channels, but no channels are downloaded yet. To avoid having to download a nix-channel every time the VM is reset, you can use the following nixos configuration:

{ pkgs, ... }: {
  nix.nixPath = [
    "nixpkgs=${pkgs.path}"
  ];
}

This will add the nixpkgs that is used for the VM to the NIX_PATH of the login shell.
Instead of using the cli, it's also possible to include the nixos-shell NixOS module in your own NixOS configuration. Add this to your flake.nix:
{
inputs.nixos-shell.url = "github:Mic92/nixos-shell";
}
And this to your nixos configuration defined in your flake:
{
imports = [ inputs.nixos-shell.nixosModules.nixos-shell ];
}
Afterwards you can start your nixos configuration with nixos-shell using one of the two following variants.

For the pure version (doesn't set SHELL or mount /home):

nix run .#nixosConfigurations.<yourmachine>.config.system.build.vm

Or for a version closer to nixos-shell:

nix run .#nixosConfigurations.<yourmachine>.config.system.build.nixos-shell
It's possible to specify a different architecture using --guest-system. This requires your host system to have either a remote builder (e.g. darwin-builder on macOS) or to be able to run builds in emulation for the guest system (boot.binfmt.emulatedSystems on NixOS).
Here is an example for macOS (arm) that will run an aarch64-linux vm:
$ nixos-shell --guest-system aarch64-linux examples/vm.nix
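On a NixOS host, enabling emulated builds for aarch64-linux guests might look like the following host configuration sketch:

```nix
{
  # Let the host run aarch64-linux binaries (and thus builds) via QEMU user emulation.
  boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
}
```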
Have a look at the virtualisation options NixOS provides.