canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

"Unable to run feature checks during QEMU initialization: open /tmp/1373261747: no such file or directory" breaks VM use cases #13782

Open cpaelzer opened 1 month ago

cpaelzer commented 1 month ago

Required information

$ snap list --all lxd core20 core22 core24 snapd
Name    Version      Rev    Tracking       Publisher   Notes
core20  20240227     2264   latest/stable  canonical✓  base,disabled
core20  20240416     2318   latest/stable  canonical✓  base
core22  20240111     1122   latest/stable  canonical✓  base,disabled
core22  20240408     1380   latest/stable  canonical✓  base
lxd     6.1-90889b0  29398  latest/stable  canonical✓  disabled
lxd     6.1-0d4d89b  29469  latest/stable  canonical✓  -
snapd   2.62         21465  latest/stable  canonical✓  snapd,disabled
snapd   2.63         21759  latest/stable  canonical✓  snapd
$ lxc info
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- init_preseed_storage_volumes
- metrics_instances_count
- server_instance_type_info
- resources_disk_mounted
- server_version_lts
- oidc_groups_claim
- loki_config_instance
- storage_volatile_uuid
- import_instance_devices
- instances_uefi_vars
- instances_migration_stateful
- container_syscall_filtering_allow_deny_syntax
- access_management
- vm_disk_io_limits
- storage_volumes_all
- instances_files_modify_permissions
- image_restriction_nesting
- container_syscall_intercept_finit_module
- device_usb_serial
- network_allocate_external_ips
- explicit_trust_token
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: paelzer
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB9TCCAXygAwIBAgIRAMYc8ESl2OfbTJSXPUG4uxkwCgYIKoZIzj0EAwMwKjEM
    MAoGA1UEChMDTFhEMRowGAYDVQQDDBFyb290QEtlc2NoZGVpY2hlbDAeFw0yMzA5
    MjYwODIzMzFaFw0zMzA5MjMwODIzMzFaMCoxDDAKBgNVBAoTA0xYRDEaMBgGA1UE
    AwwRcm9vdEBLZXNjaGRlaWNoZWwwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATz+fMB
    6oY7bWtMEZOMdG2GSIGAUJCM+3o1vs7iIVNNC4RTKiOZZFGXgC5g8GVCYNxkFQzw
    sIZwTs6ZzYse6VbHURm2791nbV9GB3rx4gt8GdNUCSX9SMHllvZXH4YZ8OijZjBk
    MA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8E
    AjAAMC8GA1UdEQQoMCaCDEtlc2NoZGVpY2hlbIcEfwAAAYcQAAAAAAAAAAAAAAAA
    AAAAATAKBggqhkjOPQQDAwNnADBkAjAGiyYsJj61S7qcvxgxAQOq/DdB4p21zFdb
    VvwBZ8N4ZYLs7UKX9Q5Lko47TQA+cUUCMHRNUu1xdBN/n4EhP6v3PqfWbGimdoCp
    sn/ree/oWiUdIpKu3v34lr4enZ2lhrPJ8w==
    -----END CERTIFICATE-----
  certificate_fingerprint: d229b65230ecd065e500728ad52c64f50a8a89c37987a998f1ba42d50dca3827
  driver: lxc
  driver_version: 6.0.0
  instance_types:
  - container
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_fscaps: "false"
  kernel_version: 6.8.0-31-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "24.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: Keschdeichel
  server_pid: 2713330
  server_version: "6.1"
  server_lts: false
  storage: zfs
  storage_version: 2.2.2-0ubuntu9
  storage_supported_drivers:
  - name: lvm
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.48.0
    remote: false
  - name: powerflex
    version: 1.16 (nvme-cli)
    remote: true
  - name: zfs
    version: 2.2.2-0ubuntu9
    remote: false
  - name: btrfs
    version: 5.16.2
    remote: false
  - name: ceph
    version: 17.2.7
    remote: true
  - name: cephfs
    version: 17.2.7
    remote: true
  - name: cephobject
    version: 17.2.7
    remote: true
  - name: dir
    version: "1"
    remote: false

Issue description

Starting a VM no longer works; this is what I get:

$ lxc launch ubuntu-daily:j j-vm --ephemeral --vm
Creating j-vm
Error: Failed instance creation: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: QEMU failed to run feature checks

This has worked multiple times throughout the week and before, but it stopped working all of a sudden. The same kernel and environment had been working for almost two months:

$ uptime 
 10:07:42 up 58 days,  1:56,  1 user,  load average: 1,95, 2,74, 3,05

Steps to reproduce

  1. Start a VM like lxc launch ubuntu-daily:j j-vm --ephemeral --vm

  2. I found that launching a VM again will only re-show the message, but not re-probe and re-trigger the underlying issue. Running sudo systemctl restart snap.lxd.daemon, however, will make LXD re-probe and re-fail according to the logs below (see the snippet after this list).
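
For example, the probe can be re-triggered and the resulting log line captured in one go (a sketch, assuming the default snap log path):

$ sudo systemctl restart snap.lxd.daemon
$ sudo grep "feature checks" /var/snap/lxd/common/lxd/logs/lxd.log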

Information to attach

I've done some debugging to try to help you to help me :-)

My first suspicion was an unattended snap refresh, and indeed I see:

$ snap info lxd
...
refresh-date: yesterday at 14:06 CEST

Apt upgrades should not influence this much, but for completeness, here is what happened there in the last two days, i.e. since it was still working:

$ grep "upgrade" /var/log/dpkg.log
...
2024-07-17 06:53:04 upgrade ghostscript:amd64 10.02.1~dfsg1-0ubuntu7.1 10.02.1~dfsg1-0ubuntu7.3
2024-07-17 06:53:04 upgrade libgs10:amd64 10.02.1~dfsg1-0ubuntu7.1 10.02.1~dfsg1-0ubuntu7.3
2024-07-17 06:53:05 upgrade libgs10-common:all 10.02.1~dfsg1-0ubuntu7.1 10.02.1~dfsg1-0ubuntu7.3
2024-07-17 06:53:05 upgrade libgs-common:all 10.02.1~dfsg1-0ubuntu7.1 10.02.1~dfsg1-0ubuntu7.3
2024-07-18 06:45:51 upgrade libgtk2.0-common:all 2.24.33-4ubuntu1 2.24.33-4ubuntu1.1
2024-07-18 06:45:51 upgrade libgtk2.0-bin:amd64 2.24.33-4ubuntu1 2.24.33-4ubuntu1.1
2024-07-18 06:45:51 upgrade libgail-common:amd64 2.24.33-4ubuntu1 2.24.33-4ubuntu1.1
2024-07-18 06:45:52 upgrade libgail18t64:amd64 2.24.33-4ubuntu1 2.24.33-4ubuntu1.1
2024-07-18 06:45:52 upgrade libgtk2.0-0t64:amd64 2.24.33-4ubuntu1 2.24.33-4ubuntu1.1
2024-07-18 06:45:52 upgrade gtk2-engines-pixbuf:amd64 2.24.33-4ubuntu1 2.24.33-4ubuntu1.1
2024-07-18 06:45:59 upgrade libgtk-3-common:all 3.24.41-4ubuntu1 3.24.41-4ubuntu1.1
2024-07-18 06:45:59 upgrade libgtk-3-0t64:amd64 3.24.41-4ubuntu1 3.24.41-4ubuntu1.1
2024-07-18 06:46:00 upgrade gir1.2-gtk-3.0:amd64 3.24.41-4ubuntu1 3.24.41-4ubuntu1.1
2024-07-18 06:46:00 upgrade gtk-update-icon-cache:amd64 3.24.41-4ubuntu1 3.24.41-4ubuntu1.1
2024-07-18 06:46:00 upgrade libgtk-3-bin:amd64 3.24.41-4ubuntu1 3.24.41-4ubuntu1.1
2024-07-18 06:46:07 upgrade libsysmetrics1:amd64 1.7.3build2 1.7.3ubuntu0.24.04.1
2024-07-18 06:46:07 upgrade ubuntu-report:amd64 1.7.3build2 1.7.3ubuntu0.24.04.1
2024-07-18 06:46:13 upgrade ipp-usb:amd64 0.9.24-0ubuntu3 0.9.24-0ubuntu3.1
2024-07-18 06:46:19 upgrade hugo:amd64 0.123.7-1build1 0.123.7-1ubuntu0.1

I've found in other discussions that you'd usually check for the kvm and vsock devices, but those look fine to me:

$ lsmod | grep -e vsock -e kvm
vhost_vsock            24576  0
vmw_vsock_virtio_transport_common    61440  1 vhost_vsock
vhost                  65536  2 vhost_vsock,vhost_net
vsock                  65536  2 vmw_vsock_virtio_transport_common,vhost_vsock
kvm_intel             487424  0
kvm                  1437696  1 kvm_intel
irqbypass              12288  1 kvm
$ ll /dev/vsock /dev/kvm 
crw-rw----+ 1 root kvm  10, 232 Jul 18 08:36 /dev/kvm
crw-rw-rw-  1 root root 10, 121 Mai 24 03:46 /dev/vsock
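
As an extra sanity check (assuming the cpu-checker package is installed), kvm-ok should confirm that hardware acceleration is usable, printing something like:

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used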

Furthermore, I ran lxd.check-kernel:

$ lxd.check-kernel 
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-6.8.0-31-generic

--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled
Namespace limits:
  cgroup: 127544
  ipc: 127544
  mnt: 127544
  net: 127544
  pid: 127544
  time: 127544
  user: 127544
  uts: 127544

--- Control groups ---
Cgroups: enabled
Cgroup namespace: enabled
Cgroup v1 mount points: 
Cgroup v2 mount points: 
 - /sys/fs/cgroup
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, loaded
Advanced netfilter: enabled, loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, loaded
FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

So the issue seems to be some call to QEMU to check capabilities that fails, and below we can see it has some trouble with temporary directories.

In the LXD log I see:

$ sudo cat /var/snap/lxd/common/lxd/logs/lxd.log
time="2024-07-17T14:06:10+02:00" level=warning msg=" - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead"
time="2024-07-17T14:06:10+02:00" level=error msg="Unable to run feature checks during QEMU initialization: open /tmp/4115688742: no such file or directory"
time="2024-07-17T14:06:10+02:00" level=warning msg="Instance type not operational" driver=qemu err="QEMU failed to run feature checks" type=virtual-machine

dmesg at the time shows nothing, as LXD bails out before even trying to start the VM. But restarting LXD triggers a re-probe, causing the same failure again:

$ sudo systemctl restart snap.lxd.daemon
# Tail was running, the restart reset the file
tail: /var/snap/lxd/common/lxd/logs/lxd.log: file truncated
time="2024-07-18T09:40:31+02:00" level=warning msg=" - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead"
time="2024-07-18T09:40:31+02:00" level=error msg="Unable to run feature checks during QEMU initialization: open /tmp/1373261747: no such file or directory"
time="2024-07-18T09:40:31+02:00" level=warning msg="Instance type not operational" driver=qemu err="QEMU failed to run feature checks" type=virtual-machine

While doing that, I get dmesg output that seems related to LXD, but no new AppArmor denials, for example. Most of the output is the recycling of the containers that I still have running.

[5017121.116568] physeAfqIg: renamed from eth0
[5017121.121410] lxdbr0: port 2(vethaecf1ff2) entered disabled state
[5017121.126701] veth3984f24c: renamed from physeAfqIg
[5017121.172070] lxdbr0: port 2(vethaecf1ff2) entered blocking state
[5017121.172079] lxdbr0: port 2(vethaecf1ff2) entered forwarding state
[5017121.226604] vethaecf1ff2: left allmulticast mode
[5017121.226609] vethaecf1ff2: left promiscuous mode
[5017121.226654] lxdbr0: port 2(vethaecf1ff2) entered disabled state
[5017121.298587] physwpqV2Q: renamed from eth0
[5017121.305815] lxdbr0: port 3(vethc9859315) entered disabled state
[5017121.311470] veth1e83e7c6: renamed from physwpqV2Q
[5017121.357972] lxdbr0: port 3(vethc9859315) entered blocking state
[5017121.357980] lxdbr0: port 3(vethc9859315) entered forwarding state
[5017121.385286] physgMjCCT: renamed from eth0
[5017121.389162] lxdbr0: port 1(veth02525087) entered disabled state
[5017121.397347] veth9fcd538c: renamed from physgMjCCT
[5017121.431657] vethc9859315: left allmulticast mode
[5017121.431664] vethc9859315: left promiscuous mode
[5017121.431712] lxdbr0: port 3(vethc9859315) entered disabled state
[5017121.473388] lxdbr0: port 1(veth02525087) entered blocking state
[5017121.473402] lxdbr0: port 1(veth02525087) entered forwarding state
[5017121.523516] veth02525087: left allmulticast mode
[5017121.523524] veth02525087: left promiscuous mode
[5017121.523580] lxdbr0: port 1(veth02525087) entered disabled state
[5017121.606642] virbr0-nic: left allmulticast mode
[5017121.606649] virbr0-nic: left promiscuous mode
[5017121.606673] virbr0: port 1(virbr0-nic) entered disabled state
[5017122.064286] audit: type=1400 audit(1721289015.586:6346): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-d-sid_</var/snap/lxd/common/lxd>" pid=2736614 comm="apparmor_parser"
[5017122.230434] audit: type=1400 audit(1721289015.751:6347): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-d12_</var/snap/lxd/common/lxd>" pid=2736692 comm="apparmor_parser"
[5017122.414074] audit: type=1400 audit(1721289015.936:6348): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-b_</var/snap/lxd/common/lxd>" pid=2736805 comm="apparmor_parser"
[5017122.954458] phys8HsYaD: renamed from eth0
[5017122.961329] lxdbr0: port 6(veth3078af84) entered disabled state
[5017122.968606] veth06d62d17: renamed from phys8HsYaD
[5017123.008300] lxdbr0: port 6(veth3078af84) entered blocking state
[5017123.008307] lxdbr0: port 6(veth3078af84) entered forwarding state
[5017123.064084] veth3078af84: left allmulticast mode
[5017123.064090] veth3078af84: left promiscuous mode
[5017123.064143] lxdbr0: port 6(veth3078af84) entered disabled state
[5017123.146264] physMy1K5N: renamed from eth0
[5017123.156157] lxdbr0: port 7(veth661dca17) entered disabled state
[5017123.166575] vethf1a076c4: renamed from physMy1K5N
[5017123.221505] lxdbr0: port 7(veth661dca17) entered blocking state
[5017123.221513] lxdbr0: port 7(veth661dca17) entered forwarding state
[5017123.274100] veth661dca17: left allmulticast mode
[5017123.274106] veth661dca17: left promiscuous mode
[5017123.274153] lxdbr0: port 7(veth661dca17) entered disabled state
[5017123.984844] audit: type=1400 audit(1721289017.506:6349): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-n_</var/snap/lxd/common/lxd>" pid=2738390 comm="apparmor_parser"
[5017124.036364] physpDQM1u: renamed from eth0
[5017124.040159] lxdbr0: port 4(veth8399d4aa) entered disabled state
[5017124.045521] veth27a99f85: renamed from physpDQM1u
[5017124.062382] lxdbr0: port 4(veth8399d4aa) entered blocking state
[5017124.062389] lxdbr0: port 4(veth8399d4aa) entered forwarding state
[5017124.114815] audit: type=1400 audit(1721289017.636:6350): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-o_</var/snap/lxd/common/lxd>" pid=2738404 comm="apparmor_parser"
[5017124.124730] veth8399d4aa: left allmulticast mode
[5017124.124735] veth8399d4aa: left promiscuous mode
[5017124.124769] lxdbr0: port 4(veth8399d4aa) entered disabled state
[5017124.902319] audit: type=1400 audit(1721289018.424:6351): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-f_</var/snap/lxd/common/lxd>" pid=2738568 comm="apparmor_parser"
[5017125.261956] physfqsUcf: renamed from eth0
[5017125.267818] lxdbr0: port 5(veth05711aac) entered disabled state
[5017125.281596] vethd74a94ed: renamed from physfqsUcf
[5017125.318506] lxdbr0: port 5(veth05711aac) entered blocking state
[5017125.318512] lxdbr0: port 5(veth05711aac) entered forwarding state
[5017125.372011] veth05711aac: left allmulticast mode
[5017125.372017] veth05711aac: left promiscuous mode
[5017125.372048] lxdbr0: port 5(veth05711aac) entered disabled state
[5017126.177859] audit: type=1400 audit(1721289019.699:6352): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-j_</var/snap/lxd/common/lxd>" pid=2738658 comm="apparmor_parser"
[5017126.487113] audit: type=1400 audit(1721289020.009:6353): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/snap/lxd/common/lxd>" pid=2738676 comm="apparmor_parser"
[5017130.502906] audit: type=1400 audit(1721289024.025:6354): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="1password" pid=2739097 comm="apparmor_parser"
[5017130.502924] audit: type=1400 audit(1721289024.025:6355): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="QtWebEngineProcess" pid=2739100 comm="apparmor_parser"
[5017130.503619] audit: type=1400 audit(1721289024.025:6356): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="Discord" pid=2739098 comm="apparmor_parser"
[5017130.506891] audit: type=1400 audit(1721289024.028:6357): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name=4D6F6E676F444220436F6D70617373 pid=2739099 comm="apparmor_parser"
[5017130.508264] audit: type=1400 audit(1721289024.030:6358): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="buildah" pid=2739107 comm="apparmor_parser"
[5017130.509043] audit: type=1400 audit(1721289024.031:6359): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="busybox" pid=2739108 comm="apparmor_parser"
[5017130.515377] audit: type=1400 audit(1721289024.037:6360): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="cam" pid=2739109 comm="apparmor_parser"
[5017130.515571] audit: type=1400 audit(1721289024.037:6361): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="ch-run" pid=2739111 comm="apparmor_parser"
[5017130.517280] audit: type=1400 audit(1721289024.039:6362): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="brave" pid=2739103 comm="apparmor_parser"
[5017130.518039] audit: type=1400 audit(1721289024.040:6363): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="chrome" pid=2739116 comm="apparmor_parser"
[5017132.092784] lxdbr0: port 1(veth03cf2f16) entered blocking state
[5017132.092790] lxdbr0: port 1(veth03cf2f16) entered disabled state
[5017132.094242] veth03cf2f16: entered allmulticast mode
[5017132.094721] veth03cf2f16: entered promiscuous mode
[5017132.232209] phys6ajK83: renamed from veth5e00c9e2
[5017132.237677] eth0: renamed from phys6ajK83
[5017132.242146] lxdbr0: port 1(veth03cf2f16) entered blocking state
[5017132.242152] lxdbr0: port 1(veth03cf2f16) entered forwarding state
[5017132.377823] lxdbr0: port 2(veth83cbd418) entered blocking state
[5017132.377829] lxdbr0: port 2(veth83cbd418) entered disabled state
[5017132.377949] veth83cbd418: entered allmulticast mode
[5017132.378044] veth83cbd418: entered promiscuous mode
[5017132.522051] physjR0o2Q: renamed from vethf311c8b0
[5017132.530203] eth0: renamed from physjR0o2Q
[5017132.534393] lxdbr0: port 2(veth83cbd418) entered blocking state
[5017132.534398] lxdbr0: port 2(veth83cbd418) entered forwarding state
[5017132.720697] lxdbr0: port 3(veth5f54f484) entered blocking state
[5017132.720703] lxdbr0: port 3(veth5f54f484) entered disabled state
[5017132.720722] veth5f54f484: entered allmulticast mode
[5017132.720954] veth5f54f484: entered promiscuous mode
[5017132.924220] physWIxzSF: renamed from veth9bfd51a8
[5017132.932744] eth0: renamed from physWIxzSF
[5017132.938656] lxdbr0: port 3(veth5f54f484) entered blocking state
[5017132.938663] lxdbr0: port 3(veth5f54f484) entered forwarding state
[5017133.127609] lxdbr0: port 4(veth66d3b248) entered blocking state
[5017133.127614] lxdbr0: port 4(veth66d3b248) entered disabled state
[5017133.127630] veth66d3b248: entered allmulticast mode
[5017133.134323] veth66d3b248: entered promiscuous mode
[5017133.352960] physSW5moP: renamed from veth370f1ee3
[5017133.364510] eth0: renamed from physSW5moP
[5017133.369570] lxdbr0: port 4(veth66d3b248) entered blocking state
[5017133.369575] lxdbr0: port 4(veth66d3b248) entered forwarding state
[5017133.586258] lxdbr0: port 5(veth9cf09230) entered blocking state
[5017133.586263] lxdbr0: port 5(veth9cf09230) entered disabled state
[5017133.586291] veth9cf09230: entered allmulticast mode
[5017133.586348] veth9cf09230: entered promiscuous mode
[5017133.864161] physRVtAL5: renamed from vethc60bfe47
[5017133.873181] eth0: renamed from physRVtAL5
[5017133.879913] lxdbr0: port 5(veth9cf09230) entered blocking state
[5017133.879920] lxdbr0: port 5(veth9cf09230) entered forwarding state
[5017134.117746] lxdbr0: port 6(veth28e345d3) entered blocking state
[5017134.117751] lxdbr0: port 6(veth28e345d3) entered disabled state
[5017134.117766] veth28e345d3: entered allmulticast mode
[5017134.121906] veth28e345d3: entered promiscuous mode
[5017134.416214] physoaOD6Z: renamed from veth61faf975
[5017134.422104] eth0: renamed from physoaOD6Z
[5017134.428286] lxdbr0: port 6(veth28e345d3) entered blocking state
[5017134.428292] lxdbr0: port 6(veth28e345d3) entered forwarding state
[5017134.679774] lxdbr0: port 7(veth36000eb4) entered blocking state
[5017134.679780] lxdbr0: port 7(veth36000eb4) entered disabled state
[5017134.679796] veth36000eb4: entered allmulticast mode
[5017134.679889] veth36000eb4: entered promiscuous mode
[5017134.932109] physTICUzN: renamed from vethdbe05b55
[5017134.938979] eth0: renamed from physTICUzN
[5017134.946121] lxdbr0: port 7(veth36000eb4) entered blocking state
[5017134.946126] lxdbr0: port 7(veth36000eb4) entered forwarding state
[5017135.522450] kauditd_printk_skb: 230 callbacks suppressed
[5017135.522453] audit: type=1400 audit(1721289029.044:6594): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="1password" pid=2741137 comm="apparmor_parser"
[5017135.522824] audit: type=1400 audit(1721289029.045:6595): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="Discord" pid=2741138 comm="apparmor_parser"
[5017135.528799] audit: type=1400 audit(1721289029.050:6596): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="QtWebEngineProcess" pid=2741140 comm="apparmor_parser"
[5017135.540701] audit: type=1400 audit(1721289029.062:6597): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="buildah" pid=2741143 comm="apparmor_parser"
[5017135.542242] audit: type=1400 audit(1721289029.064:6598): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="brave" pid=2741142 comm="apparmor_parser"
[5017135.542585] audit: type=1400 audit(1721289029.064:6599): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="balena-etcher" pid=2741141 comm="apparmor_parser"
[5017135.543937] audit: type=1400 audit(1721289029.066:6600): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name=4D6F6E676F444220436F6D70617373 pid=2741139 comm="apparmor_parser"
[5017135.549544] audit: type=1400 audit(1721289029.071:6601): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="busybox" pid=2741144 comm="apparmor_parser"
[5017135.550469] audit: type=1400 audit(1721289029.072:6602): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="bwrap" pid=2741145 comm="apparmor_parser"
[5017135.550473] audit: type=1400 audit(1721289029.072:6603): apparmor="STATUS" operation="profile_load" label="lxd-n_</var/snap/lxd/common/lxd>//&:lxd-n_<var-snap-lxd-common-lxd>:unconfined" name="unpriv_bwrap" pid=2741145 comm="apparmor_parser"

lxc info NAME --show-log and lxc config show NAME --expanded do not apply, as the instance never gets created.

Output of the client with --debug

$ lxc launch ubuntu-daily:j j-vm --ephemeral --vm --debug
DEBUG  [2024-07-18T09:53:10+02:00] Connecting to a local LXD over a Unix socket 
DEBUG  [2024-07-18T09:53:10+02:00] Sending request to LXD                        etag= method=GET url="http://unix.socket/1.0"
DEBUG  [2024-07-18T09:53:10+02:00] Got response struct from LXD                 
DEBUG  [2024-07-18T09:53:10+02:00] 
    {
        "config": {},
        "api_extensions": [
            "storage_zfs_remove_snapshots",
            "container_host_shutdown_timeout",
            "container_stop_priority",
            "container_syscall_filtering",
            "auth_pki",
            "container_last_used_at",
            "etag",
            "patch",
            "usb_devices",
            "https_allowed_credentials",
            "image_compression_algorithm",
            "directory_manipulation",
            "container_cpu_time",
            "storage_zfs_use_refquota",
            "storage_lvm_mount_options",
            "network",
            "profile_usedby",
            "container_push",
            "container_exec_recording",
            "certificate_update",
            "container_exec_signal_handling",
            "gpu_devices",
            "container_image_properties",
            "migration_progress",
            "id_map",
            "network_firewall_filtering",
            "network_routes",
            "storage",
            "file_delete",
            "file_append",
            "network_dhcp_expiry",
            "storage_lvm_vg_rename",
            "storage_lvm_thinpool_rename",
            "network_vlan",
            "image_create_aliases",
            "container_stateless_copy",
            "container_only_migration",
            "storage_zfs_clone_copy",
            "unix_device_rename",
            "storage_lvm_use_thinpool",
            "storage_rsync_bwlimit",
            "network_vxlan_interface",
            "storage_btrfs_mount_options",
            "entity_description",
            "image_force_refresh",
            "storage_lvm_lv_resizing",
            "id_map_base",
            "file_symlinks",
            "container_push_target",
            "network_vlan_physical",
            "storage_images_delete",
            "container_edit_metadata",
            "container_snapshot_stateful_migration",
            "storage_driver_ceph",
            "storage_ceph_user_name",
            "resource_limits",
            "storage_volatile_initial_source",
            "storage_ceph_force_osd_reuse",
            "storage_block_filesystem_btrfs",
            "resources",
            "kernel_limits",
            "storage_api_volume_rename",
            "network_sriov",
            "console",
            "restrict_devlxd",
            "migration_pre_copy",
            "infiniband",
            "maas_network",
            "devlxd_events",
            "proxy",
            "network_dhcp_gateway",
            "file_get_symlink",
            "network_leases",
            "unix_device_hotplug",
            "storage_api_local_volume_handling",
            "operation_description",
            "clustering",
            "event_lifecycle",
            "storage_api_remote_volume_handling",
            "nvidia_runtime",
            "container_mount_propagation",
            "container_backup",
            "devlxd_images",
            "container_local_cross_pool_handling",
            "proxy_unix",
            "proxy_udp",
            "clustering_join",
            "proxy_tcp_udp_multi_port_handling",
            "network_state",
            "proxy_unix_dac_properties",
            "container_protection_delete",
            "unix_priv_drop",
            "pprof_http",
            "proxy_haproxy_protocol",
            "network_hwaddr",
            "proxy_nat",
            "network_nat_order",
            "container_full",
            "backup_compression",
            "nvidia_runtime_config",
            "storage_api_volume_snapshots",
            "storage_unmapped",
            "projects",
            "network_vxlan_ttl",
            "container_incremental_copy",
            "usb_optional_vendorid",
            "snapshot_scheduling",
            "snapshot_schedule_aliases",
            "container_copy_project",
            "clustering_server_address",
            "clustering_image_replication",
            "container_protection_shift",
            "snapshot_expiry",
            "container_backup_override_pool",
            "snapshot_expiry_creation",
            "network_leases_location",
            "resources_cpu_socket",
            "resources_gpu",
            "resources_numa",
            "kernel_features",
            "id_map_current",
            "event_location",
            "storage_api_remote_volume_snapshots",
            "network_nat_address",
            "container_nic_routes",
            "cluster_internal_copy",
            "seccomp_notify",
            "lxc_features",
            "container_nic_ipvlan",
            "network_vlan_sriov",
            "storage_cephfs",
            "container_nic_ipfilter",
            "resources_v2",
            "container_exec_user_group_cwd",
            "container_syscall_intercept",
            "container_disk_shift",
            "storage_shifted",
            "resources_infiniband",
            "daemon_storage",
            "instances",
            "image_types",
            "resources_disk_sata",
            "clustering_roles",
            "images_expiry",
            "resources_network_firmware",
            "backup_compression_algorithm",
            "ceph_data_pool_name",
            "container_syscall_intercept_mount",
            "compression_squashfs",
            "container_raw_mount",
            "container_nic_routed",
            "container_syscall_intercept_mount_fuse",
            "container_disk_ceph",
            "virtual-machines",
            "image_profiles",
            "clustering_architecture",
            "resources_disk_id",
            "storage_lvm_stripes",
            "vm_boot_priority",
            "unix_hotplug_devices",
            "api_filtering",
            "instance_nic_network",
            "clustering_sizing",
            "firewall_driver",
            "projects_limits",
            "container_syscall_intercept_hugetlbfs",
            "limits_hugepages",
            "container_nic_routed_gateway",
            "projects_restrictions",
            "custom_volume_snapshot_expiry",
            "volume_snapshot_scheduling",
            "trust_ca_certificates",
            "snapshot_disk_usage",
            "clustering_edit_roles",
            "container_nic_routed_host_address",
            "container_nic_ipvlan_gateway",
            "resources_usb_pci",
            "resources_cpu_threads_numa",
            "resources_cpu_core_die",
            "api_os",
            "container_nic_routed_host_table",
            "container_nic_ipvlan_host_table",
            "container_nic_ipvlan_mode",
            "resources_system",
            "images_push_relay",
            "network_dns_search",
            "container_nic_routed_limits",
            "instance_nic_bridged_vlan",
            "network_state_bond_bridge",
            "usedby_consistency",
            "custom_block_volumes",
            "clustering_failure_domains",
            "resources_gpu_mdev",
            "console_vga_type",
            "projects_limits_disk",
            "network_type_macvlan",
            "network_type_sriov",
            "container_syscall_intercept_bpf_devices",
            "network_type_ovn",
            "projects_networks",
            "projects_networks_restricted_uplinks",
            "custom_volume_backup",
            "backup_override_name",
            "storage_rsync_compression",
            "network_type_physical",
            "network_ovn_external_subnets",
            "network_ovn_nat",
            "network_ovn_external_routes_remove",
            "tpm_device_type",
            "storage_zfs_clone_copy_rebase",
            "gpu_mdev",
            "resources_pci_iommu",
            "resources_network_usb",
            "resources_disk_address",
            "network_physical_ovn_ingress_mode",
            "network_ovn_dhcp",
            "network_physical_routes_anycast",
            "projects_limits_instances",
            "network_state_vlan",
            "instance_nic_bridged_port_isolation",
            "instance_bulk_state_change",
            "network_gvrp",
            "instance_pool_move",
            "gpu_sriov",
            "pci_device_type",
            "storage_volume_state",
            "network_acl",
            "migration_stateful",
            "disk_state_quota",
            "storage_ceph_features",
            "projects_compression",
            "projects_images_remote_cache_expiry",
            "certificate_project",
            "network_ovn_acl",
            "projects_images_auto_update",
            "projects_restricted_cluster_target",
            "images_default_architecture",
            "network_ovn_acl_defaults",
            "gpu_mig",
            "project_usage",
            "network_bridge_acl",
            "warnings",
            "projects_restricted_backups_and_snapshots",
            "clustering_join_token",
            "clustering_description",
            "server_trusted_proxy",
            "clustering_update_cert",
            "storage_api_project",
            "server_instance_driver_operational",
            "server_supported_storage_drivers",
            "event_lifecycle_requestor_address",
            "resources_gpu_usb",
            "clustering_evacuation",
            "network_ovn_nat_address",
            "network_bgp",
            "network_forward",
            "custom_volume_refresh",
            "network_counters_errors_dropped",
            "metrics",
            "image_source_project",
            "clustering_config",
            "network_peer",
            "linux_sysctl",
            "network_dns",
            "ovn_nic_acceleration",
            "certificate_self_renewal",
            "instance_project_move",
            "storage_volume_project_move",
            "cloud_init",
            "network_dns_nat",
            "database_leader",
            "instance_all_projects",
            "clustering_groups",
            "ceph_rbd_du",
            "instance_get_full",
            "qemu_metrics",
            "gpu_mig_uuid",
            "event_project",
            "clustering_evacuation_live",
            "instance_allow_inconsistent_copy",
            "network_state_ovn",
            "storage_volume_api_filtering",
            "image_restrictions",
            "storage_zfs_export",
            "network_dns_records",
            "storage_zfs_reserve_space",
            "network_acl_log",
            "storage_zfs_blocksize",
            "metrics_cpu_seconds",
            "instance_snapshot_never",
            "certificate_token",
            "instance_nic_routed_neighbor_probe",
            "event_hub",
            "agent_nic_config",
            "projects_restricted_intercept",
            "metrics_authentication",
            "images_target_project",
            "cluster_migration_inconsistent_copy",
            "cluster_ovn_chassis",
            "container_syscall_intercept_sched_setscheduler",
            "storage_lvm_thinpool_metadata_size",
            "storage_volume_state_total",
            "instance_file_head",
            "instances_nic_host_name",
            "image_copy_profile",
            "container_syscall_intercept_sysinfo",
            "clustering_evacuation_mode",
            "resources_pci_vpd",
            "qemu_raw_conf",
            "storage_cephfs_fscache",
            "network_load_balancer",
            "vsock_api",
            "instance_ready_state",
            "network_bgp_holdtime",
            "storage_volumes_all_projects",
            "metrics_memory_oom_total",
            "storage_buckets",
            "storage_buckets_create_credentials",
            "metrics_cpu_effective_total",
            "projects_networks_restricted_access",
            "storage_buckets_local",
            "loki",
            "acme",
            "internal_metrics",
            "cluster_join_token_expiry",
            "remote_token_expiry",
            "init_preseed",
            "storage_volumes_created_at",
            "cpu_hotplug",
            "projects_networks_zones",
            "network_txqueuelen",
            "cluster_member_state",
            "instances_placement_scriptlet",
            "storage_pool_source_wipe",
            "zfs_block_mode",
            "instance_generation_id",
            "disk_io_cache",
            "amd_sev",
            "storage_pool_loop_resize",
            "migration_vm_live",
            "ovn_nic_nesting",
            "oidc",
            "network_ovn_l3only",
            "ovn_nic_acceleration_vdpa",
            "cluster_healing",
            "instances_state_total",
            "auth_user",
            "security_csm",
            "instances_rebuild",
            "numa_cpu_placement",
            "custom_volume_iso",
            "network_allocations",
            "storage_api_remote_volume_snapshot_copy",
            "zfs_delegate",
            "operations_get_query_all_projects",
            "metadata_configuration",
            "syslog_socket",
            "event_lifecycle_name_and_project",
            "instances_nic_limits_priority",
            "disk_initial_volume_configuration",
            "operation_wait",
            "cluster_internal_custom_volume_copy",
            "disk_io_bus",
            "storage_cephfs_create_missing",
            "instance_move_config",
            "ovn_ssl_config",
            "init_preseed_storage_volumes",
            "metrics_instances_count",
            "server_instance_type_info",
            "resources_disk_mounted",
            "server_version_lts",
            "oidc_groups_claim",
            "loki_config_instance",
            "storage_volatile_uuid",
            "import_instance_devices",
            "instances_uefi_vars",
            "instances_migration_stateful",
            "container_syscall_filtering_allow_deny_syntax",
            "access_management",
            "vm_disk_io_limits",
            "storage_volumes_all",
            "instances_files_modify_permissions",
            "image_restriction_nesting",
            "container_syscall_intercept_finit_module",
            "device_usb_serial",
            "network_allocate_external_ips",
            "explicit_trust_token"
        ],
        "api_status": "stable",
        "api_version": "1.0",
        "auth": "trusted",
        "public": false,
        "auth_methods": [
            "tls"
        ],
        "auth_user_name": "paelzer",
        "auth_user_method": "unix",
        "environment": {
            "addresses": [],
            "architectures": [
                "x86_64",
                "i686"
            ],
            "certificate": "-----BEGIN CERTIFICATE-----\nMIIB9TCCAXygAwIBAgIRAMYc8ESl2OfbTJSXPUG4uxkwCgYIKoZIzj0EAwMwKjEM\nMAoGA1UEChMDTFhEMRowGAYDVQQDDBFyb290QEtlc2NoZGVpY2hlbDAeFw0yMzA5\nMjYwODIzMzFaFw0zMzA5MjMwODIzMzFaMCoxDDAKBgNVBAoTA0xYRDEaMBgGA1UE\nAwwRcm9vdEBLZXNjaGRlaWNoZWwwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATz+fMB\n6oY7bWtMEZOMdG2GSIGAUJCM+3o1vs7iIVNNC4RTKiOZZFGXgC5g8GVCYNxkFQzw\nsIZwTs6ZzYse6VbHURm2791nbV9GB3rx4gt8GdNUCSX9SMHllvZXH4YZ8OijZjBk\nMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8E\nAjAAMC8GA1UdEQQoMCaCDEtlc2NoZGVpY2hlbIcEfwAAAYcQAAAAAAAAAAAAAAAA\nAAAAATAKBggqhkjOPQQDAwNnADBkAjAGiyYsJj61S7qcvxgxAQOq/DdB4p21zFdb\nVvwBZ8N4ZYLs7UKX9Q5Lko47TQA+cUUCMHRNUu1xdBN/n4EhP6v3PqfWbGimdoCp\nsn/ree/oWiUdIpKu3v34lr4enZ2lhrPJ8w==\n-----END CERTIFICATE-----\n",
            "certificate_fingerprint": "d229b65230ecd065e500728ad52c64f50a8a89c37987a998f1ba42d50dca3827",
            "driver": "lxc",
            "driver_version": "6.0.0",
            "instance_types": [
                "container"
            ],
            "firewall": "nftables",
            "kernel": "Linux",
            "kernel_architecture": "x86_64",
            "kernel_features": {
                "idmapped_mounts": "true",
                "netnsid_getifaddrs": "true",
                "seccomp_listener": "true",
                "seccomp_listener_continue": "true",
                "uevent_injection": "true",
                "unpriv_fscaps": "false"
            },
            "kernel_version": "6.8.0-31-generic",
            "lxc_features": {
                "cgroup2": "true",
                "core_scheduling": "true",
                "devpts_fd": "true",
                "idmapped_mounts_v2": "true",
                "mount_injection_file": "true",
                "network_gateway_device_route": "true",
                "network_ipvlan": "true",
                "network_l2proxy": "true",
                "network_phys_macvlan_mtu": "true",
                "network_veth_router": "true",
                "pidfd": "true",
                "seccomp_allow_deny_syntax": "true",
                "seccomp_notify": "true",
                "seccomp_proxy_send_notify_fd": "true"
            },
            "os_name": "Ubuntu",
            "os_version": "24.04",
            "project": "default",
            "server": "lxd",
            "server_clustered": false,
            "server_event_mode": "full-mesh",
            "server_name": "Keschdeichel",
            "server_pid": 2738834,
            "server_version": "6.1",
            "server_lts": false,
            "storage": "zfs",
            "storage_version": "2.2.2-0ubuntu9",
            "storage_supported_drivers": [
                {
                    "Name": "cephobject",
                    "Version": "17.2.7",
                    "Remote": true
                },
                {
                    "Name": "dir",
                    "Version": "1",
                    "Remote": false
                },
                {
                    "Name": "lvm",
                    "Version": "2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.48.0",
                    "Remote": false
                },
                {
                    "Name": "powerflex",
                    "Version": "1.16 (nvme-cli)",
                    "Remote": true
                },
                {
                    "Name": "zfs",
                    "Version": "2.2.2-0ubuntu9",
                    "Remote": false
                },
                {
                    "Name": "btrfs",
                    "Version": "5.16.2",
                    "Remote": false
                },
                {
                    "Name": "ceph",
                    "Version": "17.2.7",
                    "Remote": true
                },
                {
                    "Name": "cephfs",
                    "Version": "17.2.7",
                    "Remote": true
                }
            ]
        }
    } 
Creating j-vm
DEBUG  [2024-07-18T09:53:10+02:00] Connecting to a remote simplestreams server   URL="https://cloud-images.ubuntu.com/daily"
DEBUG  [2024-07-18T09:53:10+02:00] Connected to the websocket: ws://unix.socket/1.0/events 
DEBUG  [2024-07-18T09:53:10+02:00] Sending request to LXD                        etag= method=POST url="http://unix.socket/1.0/instances"
DEBUG  [2024-07-18T09:53:10+02:00] Got operation from LXD                       
DEBUG  [2024-07-18T09:53:10+02:00] 
    {
        "id": "87d2c69b-7cbc-4fa8-8e61-31e6693242ca",
        "class": "task",
        "description": "Creating instance",
        "created_at": "2024-07-18T09:53:10.348678886+02:00",
        "updated_at": "2024-07-18T09:53:10.348678886+02:00",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "instances": [
                "/1.0/instances/j-vm"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": "",
        "location": "none"
    } 
DEBUG  [2024-07-18T09:53:10+02:00] Sending request to LXD                        etag= method=GET url="http://unix.socket/1.0/operations/87d2c69b-7cbc-4fa8-8e61-31e6693242ca"
DEBUG  [2024-07-18T09:53:10+02:00] Got response struct from LXD                 
DEBUG  [2024-07-18T09:53:10+02:00] 
    {
        "id": "87d2c69b-7cbc-4fa8-8e61-31e6693242ca",
        "class": "task",
        "description": "Creating instance",
        "created_at": "2024-07-18T09:53:10.348678886+02:00",
        "updated_at": "2024-07-18T09:53:10.348678886+02:00",
        "status": "Running",
        "status_code": 103,
        "resources": {
            "instances": [
                "/1.0/instances/j-vm"
            ]
        },
        "metadata": null,
        "may_cancel": false,
        "err": "",
        "location": "none"
    } 
Error: Failed instance creation: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: QEMU failed to run feature checks

Output of lxc monitor while reproducing the issue:

location: none
metadata:
  context:
    ip: '@'
    method: GET
    protocol: unix
    url: /1.0
    username: paelzer
  level: debug
  message: Handling API request
timestamp: "2024-07-18T09:53:10.327366371+02:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    protocol: unix
    url: /1.0/events
    username: paelzer
  level: debug
  message: Handling API request
timestamp: "2024-07-18T09:53:10.346127144+02:00"
type: logging

location: none
metadata:
  context:
    id: 974287f6-5410-422a-9c4b-f328155c057c
    local: /var/snap/lxd/common/lxd/unix.socket
    remote: '@'
  level: debug
  message: Event listener server handler started
timestamp: "2024-07-18T09:53:10.346386143+02:00"
type: logging

location: none
metadata:
  context: {}
  level: debug
  message: Responding to instance create
timestamp: "2024-07-18T09:53:10.346920933+02:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: POST
    protocol: unix
    url: /1.0/instances
    username: paelzer
  level: debug
  message: Handling API request
timestamp: "2024-07-18T09:53:10.346882555+02:00"
type: logging

location: none
metadata:
  context:
    class: task
    description: Creating instance
    operation: 87d2c69b-7cbc-4fa8-8e61-31e6693242ca
    project: default
  level: debug
  message: New operation
timestamp: "2024-07-18T09:53:10.362623748+02:00"
type: logging

location: none
metadata:
  context:
    class: task
    description: Creating instance
    operation: 87d2c69b-7cbc-4fa8-8e61-31e6693242ca
    project: default
  level: debug
  message: Started operation
timestamp: "2024-07-18T09:53:10.362697968+02:00"
type: logging

location: none
metadata:
  class: task
  created_at: "2024-07-18T09:53:10.348678886+02:00"
  description: Creating instance
  err: ""
  id: 87d2c69b-7cbc-4fa8-8e61-31e6693242ca
  location: none
  may_cancel: false
  metadata: null
  resources:
    instances:
    - /1.0/instances/j-vm
  status: Running
  status_code: 103
  updated_at: "2024-07-18T09:53:10.348678886+02:00"
project: default
timestamp: "2024-07-18T09:53:10.362710664+02:00"
type: operation

location: none
metadata:
  class: task
  created_at: "2024-07-18T09:53:10.348678886+02:00"
  description: Creating instance
  err: ""
  id: 87d2c69b-7cbc-4fa8-8e61-31e6693242ca
  location: none
  may_cancel: false
  metadata: null
  resources:
    instances:
    - /1.0/instances/j-vm
  status: Pending
  status_code: 105
  updated_at: "2024-07-18T09:53:10.348678886+02:00"
project: default
timestamp: "2024-07-18T09:53:10.362677721+02:00"
type: operation

location: none
metadata:
  context:
    URL: https://cloud-images.ubuntu.com/daily
  level: debug
  message: Connecting to a remote simplestreams server
timestamp: "2024-07-18T09:53:10.363238278+02:00"
type: logging

location: none
metadata:
  context:
    ip: '@'
    method: GET
    protocol: unix
    url: /1.0/operations/87d2c69b-7cbc-4fa8-8e61-31e6693242ca
    username: paelzer
  level: debug
  message: Handling API request
timestamp: "2024-07-18T09:53:10.363470984+02:00"
type: logging

location: none
metadata:
  context:
    fingerprint: a9585abb92b2a92742ce8a8bbfd601f37858e2e89f5e93ee46fc25b7743552a2
  level: debug
  message: Lock acquired for image
timestamp: "2024-07-18T09:53:10.385011361+02:00"
type: logging

location: none
metadata:
  context:
    fingerprint: a9585abb92b2a92742ce8a8bbfd601f37858e2e89f5e93ee46fc25b7743552a2
  level: debug
  message: Acquiring lock for image
timestamp: "2024-07-18T09:53:10.38498896+02:00"
type: logging

location: none
metadata:
  context:
    fingerprint: a9585abb92b2a92742ce8a8bbfd601f37858e2e89f5e93ee46fc25b7743552a2
  level: debug
  message: Image already exists in the DB
timestamp: "2024-07-18T09:53:10.387730608+02:00"
type: logging

location: none
metadata:
  context:
    class: task
    description: Creating instance
    err: 'Failed creating instance record: Instance type "virtual-machine" is not
      supported on this server: QEMU failed to run feature checks'
    operation: 87d2c69b-7cbc-4fa8-8e61-31e6693242ca
    project: default
  level: debug
  message: Failure for operation
timestamp: "2024-07-18T09:53:10.387896527+02:00"
type: logging

location: none
metadata:
  class: task
  created_at: "2024-07-18T09:53:10.348678886+02:00"
  description: Creating instance
  err: 'Failed creating instance record: Instance type "virtual-machine" is not supported
    on this server: QEMU failed to run feature checks'
  id: 87d2c69b-7cbc-4fa8-8e61-31e6693242ca
  location: none
  may_cancel: false
  metadata: null
  resources:
    instances:
    - /1.0/instances/j-vm
  status: Failure
  status_code: 400
  updated_at: "2024-07-18T09:53:10.348678886+02:00"
project: default
timestamp: "2024-07-18T09:53:10.387928581+02:00"
type: operation

location: none
metadata:
  context:
    listener: 974287f6-5410-422a-9c4b-f328155c057c
    local: /var/snap/lxd/common/lxd/unix.socket
    remote: '@'
  level: debug
  message: Event listener server handler stopped
timestamp: "2024-07-18T09:53:10.389772719+02:00"
type: logging

As mentioned, I was suspicious of the recent auto-refresh, so I looked into it:

$ snap changes
ID   Status  Spawn                    Ready                    Summary
399  Done    yesterday at 14:05 CEST  yesterday at 14:06 CEST  Auto-refresh snap "lxd"
$ snap list  lxd --all
Name  Version      Rev    Tracking       Publisher   Notes
lxd   6.1-90889b0  29398  latest/stable  canonical✓  disabled
lxd   6.1-0d4d89b  29469  latest/stable  canonical✓  -
$ sudo snap revert lxd
2024-07-18T09:58:28+02:00 INFO Waiting for "snap.lxd.daemon.service" to stop.
lxd reverted to 6.1-90889b0
$ snap list  lxd --all
Name  Version      Rev    Tracking       Publisher   Notes
lxd   6.1-90889b0  29398  latest/stable  canonical✓  -
lxd   6.1-0d4d89b  29469  latest/stable  canonical✓  disabled
$ lxc launch ubuntu-daily:n n-vm --ephemeral --vm
Creating n-vm
Error: Failed instance creation: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: QEMU failed to run feature checks

So reverting to the previous revision did not help either.

tomponline commented 1 month ago

Does 5.21/stable work for you?
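In case it helps, switching the tracked channel to test this would be along these lines (standard snap usage, not commands from this thread):

# Move the lxd snap onto the 5.21 LTS track to see whether the failure persists there.
sudo snap refresh lxd --channel=5.21/stable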

We test on Noble (https://github.com/canonical/lxd-ci/actions/runs/9986795035) daily and before each release.

This is the problem line:

time="2024-07-18T09:40:31+02:00" level=error msg="Unable to run feature checks during QEMU initialization: open /tmp/1373261747: no such file or directory"

I would think it is this call that is failing:

https://github.com/canonical/lxd/blob/18550148ee89231f4fa2472eb0907795529c2aaf/lxd/instance/drivers/driver_qemu.go#L8600

This suggests that LXD doesn't have access to /tmp inside the snap's mount namespace.
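A quick way to test that hypothesis is to enter the snap's mount namespace and try to create a file in /tmp, much like the feature check does (a sketch; the namespace path assumes the standard snapd layout):

# Enter the LXD snap's mount namespace and try to create a temp file in /tmp,
# mimicking what the feature check does at daemon startup.
sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- mktemp /tmp/check.XXXXXX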

tomponline commented 1 month ago

I actually found that launching a VM will only re-show the message, but not re-probe and re-trigger the underlying issue. However, running sudo systemctl restart snap.lxd.daemon will make it re-probe and re-fail, according to the logs below.

This is expected behaviour: feature checks are only run at startup, not on every launch/start.
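So to re-run the checks after changing anything, the daemon has to be restarted; for example (the journalctl invocation is an illustration, not from the thread):

# Feature checks only run at daemon startup, so restart to force a re-probe...
sudo systemctl restart snap.lxd.daemon
# ...then look for the feature check error in the daemon log.
sudo journalctl -u snap.lxd.daemon --since "5 min ago" | grep -i "feature checks"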

tomponline commented 1 month ago

Just tried a reproducer here:

lxc launch ubuntu-daily:noble v1 --vm -c limits.memory=2GiB
lxc exec v1 -- snap install lxd --channel=latest/stable

lxc exec v1 -- uname -a
Linux v1 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

lxc exec v1 -- snap list
Name    Version      Rev    Tracking       Publisher   Notes
core22  20240408     1380   latest/stable  canonical✓  base
lxd     6.1-0d4d89b  29469  latest/stable  canonical✓  -
snapd   2.63         21759  latest/stable  canonical✓  snapd

lxc exec v1 -- lxd init --auto
lxc exec v1 -- lxc launch ubuntu-daily:j j-vm --ephemeral --vm
lxc exec v1 -- lxc list
+------+---------+------------------------+------------------------------------------------+-----------------------------+-----------+
| NAME |  STATE  |          IPV4          |                      IPV6                      |            TYPE             | SNAPSHOTS |
+------+---------+------------------------+------------------------------------------------+-----------------------------+-----------+
| j-vm | RUNNING | 10.102.95.156 (enp5s0) | fd42:7bde:aa78:7f4:216:3eff:fe3a:f81e (enp5s0) | VIRTUAL-MACHINE (EPHEMERAL) | 0         |
+------+---------+------------------------+------------------------------------------------+-----------------------------+-----------+

So it looks like this works in general, but something is different on your host that prevents /tmp from being accessible at LXD start time.

Does this occur every time you reload the snap?

Did you try doing snap stop lxd, then snap start lxd?

cpaelzer commented 1 month ago

Joint debugging session, thanks @tomponline

Checking /tmp in the namespace

$ sudo nsenter --mount=/run/snapd/ns/lxd.mnt --
$ touch /tmp/foo
touch: cannot touch '/tmp/foo': No such file or directory

Interesting, that is the same issue

$ findmnt /tmp/
TARGET SOURCE                                                      FSTYPE OPTIONS
/tmp   /dev/nvme0n1p5[/tmp]                                        ext4   rw,relatime,errors=remount-ro
/tmp   /dev/nvme0n1p5[/tmp/snap-private-tmp/snap.lxd/tmp//deleted] ext4   rw,relatime,errors=remount-ro
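The //deleted suffix in the SOURCE column means the directory that was bind-mounted over /tmp in the namespace has since been removed on the host. The raw mount table shows the same thing (a sketch, same namespace path as above):

# Inspect the raw mount table inside the snap's namespace; a removed bind
# source shows up with a "//deleted" suffix in the mount root field.
sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- grep ' /tmp ' /proc/self/mountinfo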

So it is deleted from the namespace's point of view, but why?

ll /tmp/snap-private-tmp
total 52
drwx------  8 root root  4096 Jul  1 08:32 ./
drwxrwxrwt 40 root root 20480 Jul 18 11:02 ../
drwx------  3 root root  4096 Mai 21 08:11 snap.canonical-livepatch/
drwx------  3 root root  4096 Mai 23 20:51 snap.element-desktop/
drwx------  3 root root  4096 Mai 23 20:51 snap.firefox/
drwx------  3 root root  4096 Mai 21 08:11 snap.ncspot/
drwx------  3 root root  4096 Jun 27 08:50 snap.ppa-dev-tools/
drwx------  3 root root  4096 Jun 19 15:06 snap.ustriage/

And indeed, on the host it is not there.

Start/stopping to reset (sudo snap stop lxd + sudo snap start lxd) did not set up the paths again.

Question for now - shouldn't snapd set those up?

cpaelzer commented 1 month ago

Things that do not re-establish the mount

What does work is manually restoring the mount, roughly as sketched below.
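The exact commands aren't recorded here, but based on the findmnt output above, the restore looks roughly like this (an illustrative sketch; it assumes snapd's usual /var/lib/snapd/hostfs view of the host root inside the namespace, and the 0700/1777 modes are guesses based on the listing above):

# Recreate the per-snap private tmp directory on the host.
sudo mkdir -p /tmp/snap-private-tmp/snap.lxd/tmp
sudo chmod 0700 /tmp/snap-private-tmp/snap.lxd
sudo chmod 1777 /tmp/snap-private-tmp/snap.lxd/tmp
# Bind-mount it back over /tmp inside the LXD snap's mount namespace; the host
# root is visible at /var/lib/snapd/hostfs from inside the namespace.
sudo nsenter --mount=/run/snapd/ns/lxd.mnt -- \
  mount --bind /var/lib/snapd/hostfs/tmp/snap-private-tmp/snap.lxd/tmp /tmp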

That gets us back to

$ findmnt /tmp/
TARGET SOURCE                                             FSTYPE OPTIONS
/tmp   /dev/nvme0n1p5[/tmp]                               ext4   rw,relatime,errors=remount-ro
/tmp   /dev/nvme0n1p5[/tmp/snap-private-tmp/snap.lxd/tmp] ext4   rw,relatime,errors=remount-ro

And from there we can run sudo systemctl reload snap.lxd.daemon.service to re-run the capability checks, after which guest VMs can be started again.
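A quick way to confirm that VM support is back after the reload is to check the server's driver list, which includes qemu only when the feature checks passed (the lxc query invocation is an illustration):

# After the reload, the server environment should list the qemu driver again.
lxc query /1.0 | jq -r '.environment.driver'
# expected output: lxc | qemu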

Update from the future (to keep everything in one place): later discussion showed that the following restores the system

snap disable lxd
snap enable lxd

Mystery: what removed it in the first place?

cpaelzer commented 1 month ago

For the record, these are the things we checked following later discussions in the snappy channel:

tomponline commented 1 month ago

We've been advised that snap disable lxd might have allowed us to remove the old mount.

norbertoisaac commented 1 month ago

Maybe the same issue? #13746

tomponline commented 1 month ago

Yes, that looks very similar, so now we also know it's not a 6.1-specific issue, thanks.

tomponline commented 1 month ago

Using snap disable lxd followed by snap enable lxd after recreating the missing directory was confirmed to fix the snap mount in https://github.com/canonical/lxd/issues/13746#issuecomment-2237539249
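For completeness, that recovery sequence is then roughly:

# Recreate the missing per-snap private tmp directory on the host...
sudo mkdir -p /tmp/snap-private-tmp/snap.lxd/tmp
# ...then cycle the snap so snapd tears down and rebuilds the mount namespace.
sudo snap disable lxd
sudo snap enable lxd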