canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

Force stopping a VM backed by LVM (!= thin) often fails to umount the LV #12808

Closed by simondeziel 5 months ago

simondeziel commented 5 months ago

tests/storage-vm failed multiple times on:

...
Device cloudinit added to v1
+ sleep 3
+ lxc exec v1 -- mount -t iso9660 -o ro /dev/sr0 /mnt
+ lxc exec v1 -- umount /dev/sr0
+ lxc config device remove v1 cloudinit
Device cloudinit removed from v1
+ lxc exec v1 -- stat /dev/sr0
stat: cannot statx '/dev/sr0': No such file or directory
+ echo '==> Stopping VM'
==> Stopping VM
+ lxc stop -f v1
Error: Failed unmounting instance: Failed to unmount LVM logical volume: Failed to unmount "/var/snap/lxd/common/lxd/storage-pools/vmpool560333/virtual-machines/v1": device or resource busy
Try `lxc info --show-log v1` for more info
+ cleanup
+ set +e
+ echo ''

+ '[' 1 = 1 ']'
+ echo 'Test failed'

It sometimes succeeds, however.
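
When it does fail, the quickest way to see what is keeping the mountpoint busy is to ask the kernel directly. A diagnostic sketch (the vmpool560333 path is taken from the error above and changes on every run):

# Mountpoint from the error message above (pool name varies per run)
MNT=/var/snap/lxd/common/lxd/storage-pools/vmpool560333/virtual-machines/v1

# Which processes still have files open on that filesystem (run as root)
fuser -vm "$MNT"

# Or scan /proc for file descriptors pointing under the mountpoint
for p in /proc/[0-9]*; do
  ls -l "$p/fd" 2>/dev/null | grep -qF "$MNT" && echo "held by $(cat "$p/comm") (PID ${p#/proc/})"
done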

Additional information:

$ snap list lxd
Name  Version        Rev    Tracking       Publisher   Notes
lxd   5.0.3-974ce5c  26838  5.0/candidate  canonical✓  -
$ lxc info
config:
  storage.images_volume: vmpool732059/images
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- vsock_api
- storage_volumes_all_projects
- projects_networks_restricted_access
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- cpu_hotplug
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- storage_pool_loop_resize
- migration_vm_live
- auth_user
- instances_state_total
- numa_cpu_placement
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- operation_wait
- cluster_internal_custom_volume_copy
- instance_move_config
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: ubuntu
auth_user_method: unix
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB3TCCAWOgAwIBAgIQGOyOwYjfmhXIrRzKTeePZTAKBggqhkjOPQQDAzAiMQww
    CgYDVQQKEwNMWEQxEjAQBgNVBAMMCXJvb3RAbHVtYTAeFw0yNDAyMDExNTA2Mjha
    Fw0zNDAxMjkxNTA2MjhaMCIxDDAKBgNVBAoTA0xYRDESMBAGA1UEAwwJcm9vdEBs
    dW1hMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAERJjXQZyLOnLIlkh81dHLIeStrh85
    k0U/NfhQ478ghPHzlfXjVVQj2sxi0OibqUBh647Yutjf/Ak+UnJbxQjr17lAgZE7
    7UVA4AUHEqLwY3udMBUxL7q/nUOwbyiwGXWoo14wXDAOBgNVHQ8BAf8EBAMCBaAw
    EwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAnBgNVHREEIDAeggRs
    dW1hhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMQDa
    bAr+2so2u6/gTP1oOmRnS8SPPsbIIOKff2u3vwqy3TqxmSf4WqDDhUalAIOBarIC
    MGIHGRMMrPMVKAvRArZXxe3MfXBrDvyPwMa5ptDOsEjVRu7mfMv9DQ2h8ff1AhlG
    nA==
    -----END CERTIFICATE-----
  certificate_fingerprint: 7ce3dc2e7c3c4c1c10e5474e3f354652972cf4de643086663f8f288bdc104683
  driver: lxc | qemu
  driver_version: 5.0.3 | 8.0.5
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.15.0-92-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "22.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: luma
  server_pid: 733802
  server_version: 5.0.3
  storage: zfs
  storage_version: 2.1.5-1ubuntu6~22.04.1
  storage_supported_drivers:
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.45.0
    remote: false
  - name: zfs
    version: 2.1.5-1ubuntu6~22.04.1
    remote: false
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: ceph
    version: 15.2.17
    remote: true
  - name: cephfs
    version: 15.2.17
    remote: true
  - name: cephobject
    version: 15.2.17
    remote: true
  - name: dir
    version: "1"
    remote: false
tomponline commented 5 months ago

@simondeziel do you only observe this on 5.0.3?

tomponline commented 5 months ago

Also observed on latest/edge:

Device cloudinit removed from v1
+ lxc exec v1 -- stat /dev/sr0
stat: cannot statx '/dev/sr0': No such file or directory
+ echo '==> Stopping VM'
==> Stopping VM
+ lxc stop -f v1
Error: Failed unmounting instance: Failed to unmount LVM logical volume: Failed to unmount "/var/snap/lxd/common/lxd/storage-pools/vmpool-lvm-2563/virtual-machines/v1": device or resource busy
Try `lxc info --show-log v1` for more info
simondeziel commented 5 months ago

This also hits lvm thin on 5.0/edge, albeit rarely, see https://github.com/canonical/lxd-ci/actions/runs/8029713685/job/21936309474#step:6:1700:

+ lxc exec v1 -- stat /dev/sr0
stat: cannot statx '/dev/sr0': No such file or directory
+ echo '==> Stopping VM'
==> Stopping VM
+ lxc stop -f v1
Error: Failed unmounting instance: Failed to unmount LVM logical volume: Failed to unmount "/var/snap/lxd/common/lxd/storage-pools/vmpool-lvm-thin-2442/virtual-machines/v1": device or resource busy
Try `lxc info --show-log v1` for more info
tomponline commented 5 months ago

Reproduced this:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 22.04 LTS amd64 (minimal daily) (20240227)
  image.label: minimal daily
  image.os: ubuntu
  image.release: jammy
  image.serial: "20240227"
  image.type: disk1.img
  image.version: "22.04"
  volatile.base_image: c2948cec8e6573161b4a149db1438c94b0bcf786c27ccfb79bc1bbbba9ae5555
  volatile.cloud-init.instance-id: f29c5bb3-d7fa-4ce4-821e-09efe9055764
  volatile.eth0.hwaddr: 00:16:3e:51:82:08
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: 788c2725-7e38-4526-be5b-cded085f7c70
  volatile.uuid.generation: 788c2725-7e38-4526-be5b-cded085f7c70
  volatile.vsock_id: "1334420875"
devices:
  block1:
    readonly: "false"
    source: /tmp/lxd-test-vmpool-lvm-1312/lxd-test-block
    type: disk
  cloudinit:
    source: cloud-init:config
    type: disk
  root:
    path: /
    pool: vmpool-lvm-1312
    type: disk
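
For reference, the failure reproduces with roughly the same sequence the test runs. A sketch, assuming a non-thin LVM pool and the stock ubuntu:22.04 VM image (pool and instance names are placeholders):

# Non-thin LVM pool, matching the "!= thin" case in the title
lxc storage create vmpool-lvm lvm lvm.use_thinpool=false

# VM on that pool, plus the cloud-init config drive
lxc launch ubuntu:22.04 v1 --vm -s vmpool-lvm
lxc config device add v1 cloudinit disk source=cloud-init:config
sleep 3   # give the virtual CD-ROM a moment to appear in the guest

# Mount and unmount the config ISO inside the guest, then detach it
lxc exec v1 -- mount -t iso9660 -o ro /dev/sr0 /mnt
lxc exec v1 -- umount /dev/sr0
lxc config device remove v1 cloudinit

# This is the step that intermittently fails to unmount the LV
lxc stop -f v1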
tomponline commented 5 months ago

LXD is keeping a file handle open to the config drive ISO:

ls /proc/7005/fd -l
total 0
lr-x------ 1 root root 64 Feb 28 15:20 0 -> /dev/null
lrwx------ 1 root root 64 Feb 28 15:20 1 -> 'socket:[42874]'
lrwx------ 1 root root 64 Feb 28 15:20 10 -> /var/snap/lxd/common/lxd/database/local.db
lrwx------ 1 root root 64 Feb 28 15:20 11 -> 'socket:[45117]'
lrwx------ 1 root root 64 Feb 28 15:20 12 -> 'socket:[45114]'
lrwx------ 1 root root 64 Feb 28 15:20 13 -> 'anon_inode:[eventpoll]'
lr-x------ 1 root root 64 Feb 28 15:20 14 -> 'pipe:[45111]'
l-wx------ 1 root root 64 Feb 28 15:20 15 -> 'pipe:[45111]'
lr-x------ 1 root root 64 Feb 28 15:20 16 -> 'pipe:[45112]'
l-wx------ 1 root root 64 Feb 28 15:20 17 -> 'pipe:[45112]'
lrwx------ 1 root root 64 Feb 28 15:20 18 -> 'anon_inode:[eventfd]'
l-wx------ 1 root root 64 Feb 28 15:20 19 -> '/var/snap/lxd/common/lxd/database/global/.probe_fallocate (deleted)'
lrwx------ 1 root root 64 Feb 28 15:20 2 -> 'socket:[42874]'
lr-x------ 1 root root 64 Feb 28 15:20 20 -> /dev/null
lrwx------ 1 root root 64 Feb 28 15:20 21 -> 'socket:[45118]'
lrwx------ 1 root root 64 Feb 28 15:20 22 -> 'socket:[43731]'
l-wx------ 1 root root 64 Feb 28 15:20 24 -> /var/snap/lxd/common/lxd/database/global/open-1
lrwx------ 1 root root 64 Feb 28 15:20 25 -> 'anon_inode:[eventfd]'
l-wx------ 1 root root 64 Feb 28 15:20 26 -> /var/snap/lxd/common/lxd/database/global/open-2
l-wx------ 1 root root 64 Feb 28 15:20 27 -> /var/snap/lxd/common/lxd/database/global/open-3
lrwx------ 1 root root 64 Feb 28 15:20 28 -> 'anon_inode:[fanotify]'
lrwx------ 1 root root 64 Feb 28 15:20 29 -> 'socket:[42978]'
lrwx------ 1 root root 64 Feb 28 14:51 3 -> 'socket:[67874]'
lr-x------ 1 root root 64 Feb 28 15:20 30 -> /dev
lrwx------ 1 root root 64 Feb 28 15:20 31 -> 'socket:[45172]'
lr-x------ 1 root root 64 Feb 28 14:49 36 -> /var/snap/lxd/common/lxd/storage-pools/vmpool-lvm-1312/virtual-machines/v1/config.iso
l-wx------ 1 root root 64 Feb 28 15:20 4 -> /var/snap/lxd/common/lxd/logs/lxd.log
lrwx------ 1 root root 64 Feb 28 15:20 5 -> 'anon_inode:[eventpoll]'
lr-x------ 1 root root 64 Feb 28 15:20 6 -> 'pipe:[44515]'
l-wx------ 1 root root 64 Feb 28 15:20 7 -> 'pipe:[44515]'
lrwx------ 1 root root 64 Feb 28 15:20 8 -> 'socket:[29191]'
lrwx------ 1 root root 64 Feb 28 15:20 9 -> 'socket:[45116]'
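
A quicker check than reading the whole listing, assuming PID 7005 above is the LXD daemon:

# Any LXD fd still pointing at the instance's config ISO
ls -l /proc/7005/fd | grep -F config.iso

# And the processes keeping the LV's mountpoint busy
fuser -vm /var/snap/lxd/common/lxd/storage-pools/vmpool-lvm-1312/virtual-machines/v1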
simondeziel commented 5 months ago

Unrelated but should LXD/dqlite close the FD associated with fallocate support probing?

l-wx------ 1 root root 64 Feb 28 15:20 19 -> '/var/snap/lxd/common/lxd/database/global/.probe_fallocate (deleted)'
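
Deleted-but-still-open descriptors like this one can be listed with, for example:

ls -l /proc/7005/fd | grep -F '(deleted)'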
tomponline commented 5 months ago

Unrelated but should LXD/dqlite close the FD associated with fallocate support probing?
l-wx------ 1 root root 64 Feb 28 15:20 19 -> '/var/snap/lxd/common/lxd/database/global/.probe_fallocate (deleted)'

@simondeziel a question for @cole-miller