When using `microvm.vms.<name>.flake` to specify a microVM's config instead of declaring it inline with `vms.<name>.config`, the guest flake, rather than the host, is the one defining options like `vcpu`, `shares` and `devices`. This makes the guest flake non-portable and ties it to a specific host system.
For example, if I wanted to create a generic microVM flake that runs jellyfin, I'd have to hardcode the number of VCPUs and memory allocated to it, the hypervisor used, the exact host path it uses for media, and so on; using a PCIe passthrough GPU would be impossible, since the GPU's PCI address differs from host to host.
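To illustrate the problem, here is a minimal sketch of what such a guest flake has to look like today, using microvm.nix's guest-side options (the jellyfin service, values, paths, and PCI address are hypothetical examples):

```nix
# Guest flake — note how host-specific details are baked into the guest.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    microvm.url = "github:astro/microvm.nix";
  };

  outputs = { self, nixpkgs, microvm }: {
    nixosConfigurations.jellyfin = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        microvm.nixosModules.microvm
        {
          services.jellyfin.enable = true;
          microvm = {
            vcpu = 4;                  # tied to one host's core count
            mem = 4096;                # MiB, tied to one host's RAM
            hypervisor = "cloud-hypervisor";
            shares = [ {
              tag = "media";
              source = "/srv/media";   # exact host path hardcoded
              mountPoint = "/media";
              proto = "virtiofs";
            } ];
            devices = [ {
              bus = "pci";
              path = "0000:01:00.0";   # one specific host's GPU address
            } ];
          };
        }
      ];
    };
  };
}
```

Every value in the `microvm` block belongs to a particular host, yet it lives in what should be a reusable guest flake.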
Moving the guest-side `microvm.*` options to the host's `microvm.vms.<name>.*` next to the flake path would fix this. Specifying everything on the host side might also allow running arbitrary flakes as microVMs, which could make it possible to e.g. use the generic jellyfin flake from my example with something like Terraform as a standalone VM on non-NixOS systems.
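Sketched out, the host side might look like this. `microvm.vms.<name>.flake` exists today; the sibling `vcpu`/`mem`/`hypervisor`/`shares`/`devices` options shown next to it are the proposed (hypothetical) addition, and all values are examples:

```nix
# Host configuration — all host-specific tuning lives on the host (proposed,
# not currently supported by microvm.nix; sibling option names are hypothetical).
{
  microvm.vms.jellyfin = {
    flake = inputs.generic-jellyfin;   # generic guest flake, no host details inside
    # Proposed host-side settings:
    vcpu = 8;
    mem = 8192;
    hypervisor = "qemu";
    shares = [ {
      tag = "media";
      source = "/tank/media";          # this host's media path
      mountPoint = "/media";
      proto = "virtiofs";
    } ];
    devices = [ {
      bus = "pci";
      path = "0000:0a:00.0";           # this host's passthrough GPU
    } ];
  };
}
```

With this split, the same guest flake could be deployed unchanged on any host, and each host would supply its own resource limits, share paths, and passthrough devices.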