Closed: dhess closed this issue 2 years ago
In my testing, QEMU through Rosetta 2 is, at best, even slower.

QEMU under Rosetta 2 appears to have only one option for acceleration: TCG (Intel HAXM isn't available on M1, and HVF doesn't work via Rosetta 2). The overhead of TCG's JIT running on top of Rosetta 2's JIT results in what feels like at least 2-3x worse performance.

There doesn't appear to be a way to disable acceleration altogether, and even if you could, I doubt it'd run better.

In practice, I've found headless VMs run tolerably via UTM. Graphical VMs were basically unusable until recently; akihikodaki has done some patchwork to get some amount of graphics acceleration going, and I believe some of it is in UTM now, waiting to be upstreamed to QEMU.
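If you want to double-check this yourself, qemu can report which accelerators a given binary supports, and you can force TCG explicitly. A quick sketch (the disk image path is just a placeholder):

```sh
# List the accelerators this particular qemu build supports:
qemu-system-x86_64 -accel help

# Force TCG explicitly; under Rosetta 2 it's effectively the only option,
# since HVF fails there and HAXM has no Apple Silicon build.
# disk.img is a placeholder for your own image.
qemu-system-x86_64 -accel tcg -m 2048 -hda disk.img
```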
(fwiw, if you want to try QEMU through Rosetta 2, or any other package, passing this expression to nix-shell should do:)
```nix
{ pkgs ? import <nixpkgs> { system = "x86_64-darwin"; } }:

pkgs.mkShell {
  packages = [ pkgs.qemu ];
}
```
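To confirm the shell actually gives you the Intel build (assuming the expression above is saved as x86-qemu.nix):

```sh
nix-shell x86-qemu.nix --run 'file "$(command -v qemu-system-x86_64)"'
# Should report a Mach-O 64-bit x86_64 executable,
# which macOS then runs via Rosetta 2.
```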
Rosetta 2 doesn't offer virtualisation support; you can't use it to run VMs. Apple's solution in macOS 13 is to provide a Rosetta 2 Linux binary that Linux VMs can mount via virtiofs and register via binfmt_misc, so that x86_64 Linux binaries inside the guest run through it.
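Roughly, the guest-side setup Apple describes looks like this (a sketch, run as root inside the Linux guest; the virtiofs share tag, mount point, and binfmt flags depend on how the VM was configured):

```sh
# Mount the Rosetta share that the Virtualization framework exposes to the guest:
mkdir -p /media/rosetta
mount -t virtiofs rosetta /media/rosetta

# Register the shared rosetta binary as the binfmt_misc interpreter for
# x86_64 ELF executables (this is the standard x86_64 ELF magic/mask):
echo ':rosetta:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/media/rosetta/rosetta:F' \
  > /proc/sys/fs/binfmt_misc/register
```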
On `aarch64-darwin`, the `qemu-system-x86_64` binary from `qemu` is an Apple Silicon binary, so presumably it's doing its own binary translation rather than using Apple's Rosetta 2.

Would it be feasible to build `qemu-system-x86_64` for Intel when the `qemu` package target platform is `aarch64-darwin`?

(I suspect it might not work with the Hypervisor framework, but it might be worth it to drop Hypervisor framework support for the user-space performance boost. At the moment, running `x86_64-linux` binaries on `aarch64-darwin` via `qemu` is painfully slow.)
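One quick way to probe the Hypervisor-framework question for any given build (a sketch; whether hvf shows up, and whether it actually works under Rosetta 2, is exactly the open question above):

```sh
# Check which accelerators the stock aarch64-darwin binary was built with:
nix-shell -p qemu --run 'qemu-system-x86_64 -accel help'
# TCG should always be listed; hvf is only useful if the binary runs natively.
```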