
Can't programmatically boot into CSM legacy disk #11908

Closed Dany9966 closed 1 year ago

Dany9966 commented 1 year ago

Required information

  kernel_version: 5.15.0-73-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: cloudbase-lxd
  server_pid: 130597
  server_version: "5.15"
  storage: zfs
  storage_version: 2.1.5-1ubuntu6~22.04.1
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.45.0
    remote: false
  - name: zfs
    version: 2.1.5-1ubuntu6~22.04.1
    remote: false
  - name: btrfs
    version: 5.16.2
    remote: false
  - name: ceph
    version: 17.2.5
    remote: true
  - name: cephfs
    version: 17.2.5
    remote: true
  - name: cephobject
    version: 17.2.5
    remote: true

Issue description

I can successfully boot into a legacy disk, but only by manually adjusting the UEFI boot settings through the console every time a VM is created. Here's the VM configuration:

architecture: x86_64
config:
  limits.cpu: "6"
  limits.memory: 4096MiB
  security.csm: "true"
  security.secureboot: "false"
  user.user-data: |
    #cloud-config
    users:
      - name: cloudbase
        ssh-authorized-keys:
          - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCh84ov/r/xjOd58tDxx4CGt/WC8gR1Twmb4oP8jLuuFBHbLLHZL5xyVJFZvl+6Ctx29g4lhtWMm43K/m6xqmTk6t2zHRUcskEPBaVdahTvMZ1khb8FWna83FzAWPWlfw74IDx9eOauxrRGawv4NqoPHrEeoQXs53qslep12KRT2Nh4KPACEzcEPOezzNzH6vj9FSpO3cXh6C1BIzfHL2cOLKJ/P4CSFbloaHbZSYrTZvNZmVE6ZVnPr+rpgGmSm+PPHuPQkXvO/w3NOK3buFn46HKEY8m34TXxIIXtgWTnOqS9eZM077ZqoAt7JX/q94z/k6JvwNR9aUt3J8Wn3SmR tmp@migration
        sudo: ['ALL=(ALL) NOPASSWD:ALL']
        groups: sudo
        shell: /bin/bash
  volatile.cloud-init.instance-id: 0f61eac0-5c93-4c9d-8010-741c7dbed366
  volatile.eth0.host_name: tap5fec66b2
  volatile.eth0.hwaddr: 00:16:3e:50:4f:72
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 82a7d248-a2de-409b-af45-344572873f6f
  volatile.uuid.generation: 82a7d248-a2de-409b-af45-344572873f6f
  volatile.vsock_id: "145"
devices:
  /dev/sdb:
    boot.priority: "1"
    pool: default
    source: a28253e9-81b0-4cca-ae73-2b784833be9b
    type: disk
  eth0:
    boot.priority: "0"
    name: eth0
    network: lxdbr0
    type: nic
  root:
    boot.priority: "2"
    path: /
    pool: default
    size: 11GiB
    type: disk
ephemeral: false
profiles: []
stateful: false
description: ""

The root disk of the VM contains the data of a bootable legacy disk, and /dev/sdb is a data disk. When I start this VM, it first tries to boot from the UEFI disk entries (as configured), then from the NICs' PXE/HTTP entries. Since I have CSM enabled, the legacy boot options are also added, but by default they are put at the bottom of the boot priority list.

We need a way to boot into the legacy disk programmatically. Since CSM is enabled in the VM configuration and the disk devices are set as top priority, it would be nice if the legacy disk entries were also prioritised alongside the UEFI disk entries.

In my case, when the VM boots without anyone touching its console, PXE takes over and the VM never boots into the legacy disk.
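For reference, a rough sketch of the CLI steps that produce this kind of configuration (the instance name legacy-vm is hypothetical, and security.csm only takes effect when secure boot is disabled):

# Create an empty VM with CSM enabled; CSM requires secure boot to be off.
lxc init legacy-vm --empty --vm
lxc config set legacy-vm security.csm=true security.secureboot=false
# Copy the profile's root disk into the instance and give it top boot
# priority (higher boot.priority values boot first).
lxc config device override legacy-vm root boot.priority=2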

tomponline commented 1 year ago

So it sounds like the boot.priority setting is not being applied to the CSM disk devices, or that UEFI devices are being put first.

Dany9966 commented 1 year ago

Yes, exactly. The priority is not being applied to the CSM disk devices.

stgraber commented 1 year ago

Unfortunately I'm not aware of any way to do this at the moment. LXD only has control over the QEMU firmware boot priorities (bootindex), which we do set correctly, but there's no way to indicate whether we want the priority to apply to EFI or to CSM entries.

This is a bit of a shortcoming of the EDK2/CSM/SeaBIOS integration. I also looked for a build option in EDK2 to flip the default order, basically defaulting to CSM in this particular case, but had no luck.
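As background, QEMU expresses per-device boot priority through the bootindex device property, which the guest firmware reads via fw_cfg. A rough sketch of what that looks like on a QEMU command line (illustrative only; LXD builds its own invocation, and root.img here is a hypothetical image):

# Lower bootindex values boot first; the firmware reads the
# resulting order from the fw_cfg "bootorder" file.
qemu-system-x86_64 -machine q35 \
  -drive file=root.img,if=none,id=root \
  -device virtio-blk-pci,drive=root,bootindex=0 \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0,bootindex=1

EDK2 honours these indexes for its UEFI boot entries, but as described above the legacy entries generated by the CSM are appended after them, which is why boot.priority never reaches the legacy disk.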

Closing because this is unfortunately outside of what LXD or apparently even QEMU can control at this point.

tomponline commented 1 year ago

Reopening this: without the ability to make CSM boot the default, CSM is not very practical when importing many instances, as each one has to have its boot order manually changed in the UEFI menu.
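For anyone hitting this in the meantime, the manual workaround mentioned above looks roughly like this (a sketch, assuming a hypothetical instance named legacy-vm and a VGA console; ESC is the usual key to enter the EDK2 setup screen):

lxc start legacy-vm
lxc console legacy-vm --type=vga
# Press ESC at the firmware splash, then use
# Boot Maintenance Manager -> Boot Options -> Change Boot Order
# to move the legacy disk entry to the top.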