canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

ACLs not saved and restored to/from container image #11901

Closed: kimfaint closed this issue 1 year ago

kimfaint commented 1 year ago

Required information

Issue description

I noticed that if ACLs are applied inside a running container (e.g. setfacl -m g:foo:rwx /var/httpd/conf.d), they persist within that container instance. But if you stop the instance, publish it as an image, and then launch a new instance from that image, the ACLs are missing from the new container.

Related Forum Post

Steps to reproduce

  1. Create a container, apply an ACL to a file, stop container
    lxc launch images:ubuntu/jammy c1
    lxc exec c1 -- apt install acl -y
    lxc exec c1 -- groupadd foo
    lxc exec c1 -- touch ./foo
    lxc exec c1 -- setfacl -m g:foo:rwx ./foo
    lxc exec c1 -- getfacl foo
    # file: foo
    # owner: root
    # group: root
    user::rw-
    group::r--
    group:foo:rwx
    mask::rwx
    other::r--
    lxc stop c1
  2. Publish container to image, launch new container from image, check ACL
    lxc publish c1
    Instance published with fingerprint: 18c3d7599731a8e77d35525752fea130e272660286dbcd6261fbcf9285e13eb9
    lxc launch 18c3d7599731a8e77d35525752fea130e272660286dbcd6261fbcf9285e13eb9 c2
    Creating c2
    Starting c2                                
    lxc exec c2 -- getfacl foo
    # file: foo
    # owner: root
    # group: root
    user::rw-
    group::rwx
    other::r--
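Comparing the two getfacl outputs above makes the loss explicit: the named-group entry (and the mask that came with it) is gone in c2. A small illustrative helper (not part of LXD) that diffs plain getfacl text, using the outputs shown above as input:

```python
def acl_entries(getfacl_output):
    """Return the set of ACL entries from getfacl text, ignoring '#' comment lines."""
    return {
        line.strip()
        for line in getfacl_output.splitlines()
        if line.strip() and not line.startswith("#")
    }

# getfacl output from c1 (before publish) and c2 (after relaunch), as above
before = """\
# file: foo
user::rw-
group::r--
group:foo:rwx
mask::rwx
other::r--
"""

after = """\
# file: foo
user::rw-
group::rwx
other::r--
"""

lost = acl_entries(before) - acl_entries(after)
print(sorted(lost))  # ['group::r--', 'group:foo:rwx', 'mask::rwx']
```

Note that c2 also shows group::rwx where c1 had group::r--: when getfacl displays a file that has a named-group entry, the group class is split into the owning-group entry plus a mask, so losing the extended ACL collapses these back into a single (now different) group permission.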

Information to attach

lxc info ``` config: core.https_address: '[::]:8443' core.trust_password: true api_extensions: - storage_zfs_remove_snapshots - container_host_shutdown_timeout - container_stop_priority - container_syscall_filtering - auth_pki - container_last_used_at - etag - patch - usb_devices - https_allowed_credentials - image_compression_algorithm - directory_manipulation - container_cpu_time - storage_zfs_use_refquota - storage_lvm_mount_options - network - profile_usedby - container_push - container_exec_recording - certificate_update - container_exec_signal_handling - gpu_devices - container_image_properties - migration_progress - id_map - network_firewall_filtering - network_routes - storage - file_delete - file_append - network_dhcp_expiry - storage_lvm_vg_rename - storage_lvm_thinpool_rename - network_vlan - image_create_aliases - container_stateless_copy - container_only_migration - storage_zfs_clone_copy - unix_device_rename - storage_lvm_use_thinpool - storage_rsync_bwlimit - network_vxlan_interface - storage_btrfs_mount_options - entity_description - image_force_refresh - storage_lvm_lv_resizing - id_map_base - file_symlinks - container_push_target - network_vlan_physical - storage_images_delete - container_edit_metadata - container_snapshot_stateful_migration - storage_driver_ceph - storage_ceph_user_name - resource_limits - storage_volatile_initial_source - storage_ceph_force_osd_reuse - storage_block_filesystem_btrfs - resources - kernel_limits - storage_api_volume_rename - macaroon_authentication - network_sriov - console - restrict_devlxd - migration_pre_copy - infiniband - maas_network - devlxd_events - proxy - network_dhcp_gateway - file_get_symlink - network_leases - unix_device_hotplug - storage_api_local_volume_handling - operation_description - clustering - event_lifecycle - storage_api_remote_volume_handling - nvidia_runtime - container_mount_propagation - container_backup - devlxd_images - container_local_cross_pool_handling - proxy_unix - proxy_udp - 
clustering_join - proxy_tcp_udp_multi_port_handling - network_state - proxy_unix_dac_properties - container_protection_delete - unix_priv_drop - pprof_http - proxy_haproxy_protocol - network_hwaddr - proxy_nat - network_nat_order - container_full - candid_authentication - backup_compression - candid_config - nvidia_runtime_config - storage_api_volume_snapshots - storage_unmapped - projects - candid_config_key - network_vxlan_ttl - container_incremental_copy - usb_optional_vendorid - snapshot_scheduling - snapshot_schedule_aliases - container_copy_project - clustering_server_address - clustering_image_replication - container_protection_shift - snapshot_expiry - container_backup_override_pool - snapshot_expiry_creation - network_leases_location - resources_cpu_socket - resources_gpu - resources_numa - kernel_features - id_map_current - event_location - storage_api_remote_volume_snapshots - network_nat_address - container_nic_routes - rbac - cluster_internal_copy - seccomp_notify - lxc_features - container_nic_ipvlan - network_vlan_sriov - storage_cephfs - container_nic_ipfilter - resources_v2 - container_exec_user_group_cwd - container_syscall_intercept - container_disk_shift - storage_shifted - resources_infiniband - daemon_storage - instances - image_types - resources_disk_sata - clustering_roles - images_expiry - resources_network_firmware - backup_compression_algorithm - ceph_data_pool_name - container_syscall_intercept_mount - compression_squashfs - container_raw_mount - container_nic_routed - container_syscall_intercept_mount_fuse - container_disk_ceph - virtual-machines - image_profiles - clustering_architecture - resources_disk_id - storage_lvm_stripes - vm_boot_priority - unix_hotplug_devices - api_filtering - instance_nic_network - clustering_sizing - firewall_driver - projects_limits - container_syscall_intercept_hugetlbfs - limits_hugepages - container_nic_routed_gateway - projects_restrictions - custom_volume_snapshot_expiry - volume_snapshot_scheduling 
- trust_ca_certificates - snapshot_disk_usage - clustering_edit_roles - container_nic_routed_host_address - container_nic_ipvlan_gateway - resources_usb_pci - resources_cpu_threads_numa - resources_cpu_core_die - api_os - container_nic_routed_host_table - container_nic_ipvlan_host_table - container_nic_ipvlan_mode - resources_system - images_push_relay - network_dns_search - container_nic_routed_limits - instance_nic_bridged_vlan - network_state_bond_bridge - usedby_consistency - custom_block_volumes - clustering_failure_domains - resources_gpu_mdev - console_vga_type - projects_limits_disk - network_type_macvlan - network_type_sriov - container_syscall_intercept_bpf_devices - network_type_ovn - projects_networks - projects_networks_restricted_uplinks - custom_volume_backup - backup_override_name - storage_rsync_compression - network_type_physical - network_ovn_external_subnets - network_ovn_nat - network_ovn_external_routes_remove - tpm_device_type - storage_zfs_clone_copy_rebase - gpu_mdev - resources_pci_iommu - resources_network_usb - resources_disk_address - network_physical_ovn_ingress_mode - network_ovn_dhcp - network_physical_routes_anycast - projects_limits_instances - network_state_vlan - instance_nic_bridged_port_isolation - instance_bulk_state_change - network_gvrp - instance_pool_move - gpu_sriov - pci_device_type - storage_volume_state - network_acl - migration_stateful - disk_state_quota - storage_ceph_features - projects_compression - projects_images_remote_cache_expiry - certificate_project - network_ovn_acl - projects_images_auto_update - projects_restricted_cluster_target - images_default_architecture - network_ovn_acl_defaults - gpu_mig - project_usage - network_bridge_acl - warnings - projects_restricted_backups_and_snapshots - clustering_join_token - clustering_description - server_trusted_proxy - clustering_update_cert - storage_api_project - server_instance_driver_operational - server_supported_storage_drivers - 
event_lifecycle_requestor_address - resources_gpu_usb - clustering_evacuation - network_ovn_nat_address - network_bgp - network_forward - custom_volume_refresh - network_counters_errors_dropped - metrics - image_source_project - clustering_config - network_peer - linux_sysctl - network_dns - ovn_nic_acceleration - certificate_self_renewal - instance_project_move - storage_volume_project_move - cloud_init - network_dns_nat - database_leader - instance_all_projects - clustering_groups - ceph_rbd_du - instance_get_full - qemu_metrics - gpu_mig_uuid - event_project - clustering_evacuation_live - instance_allow_inconsistent_copy - network_state_ovn - storage_volume_api_filtering - image_restrictions - storage_zfs_export - network_dns_records - storage_zfs_reserve_space - network_acl_log - storage_zfs_blocksize - metrics_cpu_seconds - instance_snapshot_never - certificate_token - instance_nic_routed_neighbor_probe - event_hub - agent_nic_config - projects_restricted_intercept - metrics_authentication - images_target_project - cluster_migration_inconsistent_copy - cluster_ovn_chassis - container_syscall_intercept_sched_setscheduler - storage_lvm_thinpool_metadata_size - storage_volume_state_total - instance_file_head - instances_nic_host_name - image_copy_profile - container_syscall_intercept_sysinfo - clustering_evacuation_mode - resources_pci_vpd - qemu_raw_conf - storage_cephfs_fscache - network_load_balancer - vsock_api - instance_ready_state - network_bgp_holdtime - storage_volumes_all_projects - metrics_memory_oom_total - storage_buckets - storage_buckets_create_credentials - metrics_cpu_effective_total - projects_networks_restricted_access - storage_buckets_local - loki - acme - internal_metrics - cluster_join_token_expiry - remote_token_expiry - init_preseed - storage_volumes_created_at - cpu_hotplug - projects_networks_zones - network_txqueuelen - cluster_member_state - instances_placement_scriptlet - storage_pool_source_wipe - zfs_block_mode - 
instance_generation_id - disk_io_cache - amd_sev - storage_pool_loop_resize - migration_vm_live - ovn_nic_nesting - oidc - network_ovn_l3only - ovn_nic_acceleration_vdpa - cluster_healing - instances_state_total api_status: stable api_version: "1.0" auth: trusted public: false auth_methods: - tls environment: addresses: - 172.27.5.147:8443 - 172.17.0.1:8443 - 10.236.32.1:8443 - '[fd42:f7:cba5:d1cb::1]:8443' architectures: - x86_64 - i686 certificate: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- certificate_fingerprint: ... driver: qemu | lxc driver_version: 8.0.0 | 5.0.2 firewall: nftables kernel: Linux kernel_architecture: x86_64 kernel_features: idmapped_mounts: "true" netnsid_getifaddrs: "true" seccomp_listener: "true" seccomp_listener_continue: "true" shiftfs: "false" uevent_injection: "true" unpriv_fscaps: "true" kernel_version: 5.19.0-42-generic lxc_features: cgroup2: "true" core_scheduling: "true" devpts_fd: "true" idmapped_mounts_v2: "true" mount_injection_file: "true" network_gateway_device_route: "true" network_ipvlan: "true" network_l2proxy: "true" network_phys_macvlan_mtu: "true" network_veth_router: "true" pidfd: "true" seccomp_allow_deny_syntax: "true" seccomp_notify: "true" seccomp_proxy_send_notify_fd: "true" os_name: Ubuntu os_version: "22.04" project: default server: lxd server_clustered: false server_event_mode: full-mesh server_name: tgapsyd1dbd03 server_pid: 1274717 server_version: "5.14" storage: zfs storage_version: 2.1.5-1ubuntu6 storage_supported_drivers: - name: dir version: "1" remote: false - name: lvm version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.47.0 remote: false - name: zfs version: 2.1.5-1ubuntu6 remote: false - name: btrfs version: 5.16.2 remote: false - name: ceph version: 17.2.5 remote: true - name: cephfs version: 17.2.5 remote: true - name: cephobject version: 17.2.5 remote: true ```
lxc info c1 --show-log ``` lxc info c1 --show-log Name: c1 Status: STOPPED Type: container Architecture: x86_64 Created: 2023/06/28 10:51 AEST Last Used: 2023/06/28 10:51 AEST Log: lxc c1 20230628005147.163 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c1 20230628005147.164 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c1 20230628005147.165 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c1 20230628005147.165 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c1 20230628005147.166 WARN cgfsng - ../src/src/lxc/cgroups/cgfsng.c:fchowmodat:1619 - No such file or directory - Failed to fchownat(42, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW ) lxc c1 20230628005147.166 WARN cgfsng - ../src/src/lxc/cgroups/cgfsng.c:fchowmodat:1619 - No such file or directory - Failed to fchownat(42, memory.reclaim, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW ) lxc c1 20230628005152.412 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c1 20230628005152.412 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c1 20230628005202.936 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c1 20230628005202.936 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c1 20230628005209.864 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c1 20230628005209.864 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c1 20230628005216.704 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c1 20230628005216.704 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c1 20230628005226.584 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc 
c1 20230628005226.584 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing ```
lxc info c2 --show-log ``` Name: c2 Status: STOPPED Type: container Architecture: x86_64 Created: 2023/06/28 10:58 AEST Last Used: 2023/06/28 10:58 AEST Log: lxc c2 20230628005832.879 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c2 20230628005832.879 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c2 20230628005832.881 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c2 20230628005832.881 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing lxc c2 20230628005832.882 WARN cgfsng - ../src/src/lxc/cgroups/cgfsng.c:fchowmodat:1619 - No such file or directory - Failed to fchownat(42, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW ) lxc c2 20230628005832.882 WARN cgfsng - ../src/src/lxc/cgroups/cgfsng.c:fchowmodat:1619 - No such file or directory - Failed to fchownat(42, memory.reclaim, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW ) lxc c2 20230628005900.136 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3621 - newuidmap binary is missing lxc c2 20230628005900.136 WARN conf - ../src/src/lxc/conf.c:lxc_map_ids:3627 - newgidmap binary is missing ```
lxc config show c1 --expanded ``` architecture: x86_64 config: image.architecture: amd64 image.description: Ubuntu jammy amd64 (20230627_07:43) image.os: Ubuntu image.release: jammy image.serial: "20230627_07:43" image.type: squashfs image.variant: default volatile.base_image: 057ad566f5be90f3f79bf9e68cb8f5dd75c24ac873cf9f1c968faf23695be0b4 volatile.cloud-init.instance-id: 6c9489a3-8aaa-43cc-a5c7-24f4886ab980 volatile.eth0.hwaddr: 00:16:3e:07:bd:10 volatile.idmap.base: "0" volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]' volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]' volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]' volatile.last_state.power: STOPPED volatile.last_state.ready: "false" volatile.uuid: e7724586-6e48-4938-87d1-2edf36108a9c volatile.uuid.generation: e7724586-6e48-4938-87d1-2edf36108a9c devices: eth0: name: eth0 network: lxdbr0 type: nic root: path: / pool: default type: disk ephemeral: false profiles: - default stateful: false description: "" ```
lxc config show c2 --expanded ``` architecture: x86_64 config: image.architecture: amd64 image.description: Ubuntu jammy amd64 (20230627_07:43) image.name: ubuntu-jammy-amd64-default-20230627_07:43 image.os: ubuntu image.release: jammy image.serial: "20230627_07:43" image.variant: default volatile.base_image: 18c3d7599731a8e77d35525752fea130e272660286dbcd6261fbcf9285e13eb9 volatile.cloud-init.instance-id: 6b58dcd4-17a0-4cf9-a99b-212a31a753aa volatile.eth0.hwaddr: 00:16:3e:a7:0f:3c volatile.idmap.base: "0" volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]' volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]' volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]' volatile.last_state.power: STOPPED volatile.last_state.ready: "false" volatile.uuid: c277dd86-d4d7-4051-8b42-11f2d93639e3 volatile.uuid.generation: c277dd86-d4d7-4051-8b42-11f2d93639e3 devices: eth0: name: eth0 network: lxdbr0 type: nic root: path: / pool: default type: disk ephemeral: false profiles: - default stateful: false description: "" ```
jimi3 commented 1 year ago

I followed the steps tomp posted on the forums, on Ubuntu 22.04: after export/import of the container, the ACLs were lost.

tomponline commented 1 year ago

@roosterfish if you can confirm the issue, please add the bug label. Thanks!

roosterfish commented 1 year ago

I was able to narrow the situation down a bit more. It only happens with certain storage pool drivers; when using dir, the ACLs are restored correctly.

The affected storage backends seem to be:

So I would classify this as a bug.

roosterfish commented 1 year ago

The root cause seems to be limited to publishing/exporting a ZFS-backed container. Receiving/importing into a ZFS pool from another driver preserves the ACLs, and the other storage drivers are not affected.

jimi3 commented 1 year ago

I used zfs myself when testing.

roosterfish commented 1 year ago

Thanks for confirming @jimi3. It boils down to this: if a container has the volatile.last_state.idmap config key set (which is the case when it is ZFS-backed), LXD runs the idmap.UnshiftACL() function on each file that has ACLs set, which probably causes this behavior.
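For context, the shift/unshift arithmetic implied by the volatile.last_state.idmap value shown in the config above works roughly as follows. This is a simplified Python sketch of the range mapping, not LXD's actual Go implementation of idmap.UnshiftACL(); the GID 1001 for group foo is a hypothetical example:

```python
import json

# idmap entries as shown in `lxc config show c1 --expanded`
idmap = json.loads(
    '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},'
    '{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
)

def shift_gid(nsid, idmap):
    """Map a container (namespace) GID to the GID stored on the host filesystem."""
    for entry in idmap:
        if entry["Isgid"] and entry["Nsid"] <= nsid < entry["Nsid"] + entry["Maprange"]:
            return entry["Hostid"] + (nsid - entry["Nsid"])
    raise ValueError("GID not covered by the idmap")

def unshift_gid(hostid, idmap):
    """Reverse mapping: host GID back to the container GID, as done when publishing."""
    for entry in idmap:
        if entry["Isgid"] and entry["Hostid"] <= hostid < entry["Hostid"] + entry["Maprange"]:
            return entry["Nsid"] + (hostid - entry["Hostid"])
    raise ValueError("GID not covered by the idmap")

# A hypothetical group `foo` with GID 1001 inside the container:
print(shift_gid(1001, idmap))     # 1001001 on the host filesystem
print(unshift_gid(1001001, idmap))  # 1001 again after unshifting
```

Since POSIX ACLs store named-group entries as numeric GIDs in an extended attribute, the same remapping has to be applied to each ACL entry when publishing, which is what idmap.UnshiftACL() does and where the entries appear to get lost.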