@freeekanayaka
I tested again. Sometimes I can launch the container successfully, but sometimes it fails after several attempts, and eventually it failed for good. I suspect something may be wrong with the load balancing of the cluster, because machine5 is never used even though it appears to be operational in the cluster.
> Can you try running `lxc list` from each of the nodes?
On nodes 1-3 all containers seem fine; on node 4 some containers look fine, and on node 5 none of them do.
I also found that the cluster status shown on node 4 and node 5 was wrong.
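For reference, a minimal sketch of how such a per-node check could be gathered in one pass (assuming SSH access to each machine; the hostnames are the ones used in this thread):

```sh
# Illustrative only: collect instance and cluster state from every node.
for host in machine1 machine2 machine3 machine4 machine5; do
    echo "== $host =="
    ssh "$host" "lxc list && lxc cluster list"
done
```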
The node4 lxd.buginfo output:
holytiny@machine4:~$ sudo lxd.buginfo
name: lxd
summary: System container manager and API
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact: https://github.com/lxc/lxd/issues
license: unset
description: |
**LXD is a system container manager**
With LXD you can run hundreds of containers of a variety of Linux
distributions, apply resource limits, pass in directories, USB devices
or GPUs and setup any network and storage you want.
LXD containers are lightweight, secure by default and a great
alternative to running Linux virtual machines.
**Run any Linux distribution you want**
Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
A full list of available images can be found here: https://images.linuxcontainers.org
Can't find the distribution you want? It's easy to make your own images too, either using our
`distrobuilder` tool or by assembling your own image tarball by hand.
**Containers at scale**
LXD is network aware and all interactions go through a simple REST API,
making it possible to remotely interact with containers on remote
systems, copying and moving them as you wish.
Want to go big? LXD also has built-in clustering support,
letting you turn dozens of servers into one big LXD server.
**Configuration options**
Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increases logging to debug level [default=false]
- daemon.group: Group of users that can interact with LXD [default=lxd]
- ceph.builtin: Use snap-specific ceph configuration [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
Documentation: https://lxd.readthedocs.io
commands:
- lxd.benchmark
- lxd.buginfo
- lxd.check-kernel
- lxd.lxc
- lxd
- lxd.migrate
services:
lxd.activate: oneshot, enabled, inactive
lxd.daemon: simple, enabled, active
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking: latest/stable
refresh-date: today at 09:07 CST
channels:
latest/stable: 4.0.0 2020-04-17 (14709) 62MB -
latest/candidate: 4.0.0 2020-04-17 (14709) 62MB -
latest/beta: ↑
latest/edge: git-cc06a9a 2020-04-17 (14719) 62MB -
4.0/stable: 4.0.0 2020-04-17 (14709) 62MB -
4.0/candidate: 4.0.0 2020-04-17 (14709) 62MB -
4.0/beta: ↑
4.0/edge: ↑
3.23/stable: 3.23 2020-03-30 (14133) 70MB -
3.23/candidate: 3.23 2020-03-30 (14133) 70MB -
3.23/beta: ↑
3.23/edge: ↑
3.22/stable: 3.22 2020-03-18 (13901) 70MB -
3.22/candidate: 3.22 2020-03-19 (13911) 70MB -
3.22/beta: ↑
3.22/edge: ↑
3.21/stable: 3.21 2020-02-24 (13522) 69MB -
3.21/candidate: 3.21 2020-03-04 (13588) 69MB -
3.21/beta: ↑
3.21/edge: ↑
3.20/stable: 3.20 2020-02-06 (13300) 69MB -
3.20/candidate: 3.20 2020-02-06 (13300) 69MB -
3.20/beta: ↑
3.20/edge: ↑
3.19/stable: 3.19 2020-01-27 (13162) 67MB -
3.19/candidate: 3.19 2020-01-27 (13162) 67MB -
3.19/beta: ↑
3.19/edge: ↑
3.18/stable: 3.18 2019-12-02 (12631) 57MB -
3.18/candidate: 3.18 2019-12-02 (12631) 57MB -
3.18/beta: ↑
3.18/edge: ↑
3.0/stable: 3.0.4 2019-10-10 (11348) 55MB -
3.0/candidate: 3.0.4 2019-10-10 (11348) 55MB -
3.0/beta: ↑
3.0/edge: git-81b81b9 2019-10-10 (11362) 55MB -
2.0/stable: 2.0.11 2019-10-10 (8023) 28MB -
2.0/candidate: 2.0.11 2019-10-10 (8023) 28MB -
2.0/beta: ↑
2.0/edge: git-160221d 2020-01-13 (12854) 27MB -
installed: 4.0.0 (14709) 62MB -
config:
cluster.https_address: 172.26.140.104:8443
core.https_address: 172.26.140.104:8443
core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- resources_system
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
addresses:
- 172.26.140.104:8443
architectures:
- x86_64
- i686
certificate: |
-----BEGIN CERTIFICATE-----
MIICCjCCAZCgAwIBAgIRAOC8PbDgCl1kfkP5Xt8ImKgwCgYIKoZIzj0EAwMwNjEc
MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBtYWNo
aW5lMTAeFw0yMDAzMzExMzAyMDVaFw0zMDAzMjkxMzAyMDVaMDYxHDAaBgNVBAoT
E2xpbnV4Y29udGFpbmVycy5vcmcxFjAUBgNVBAMMDXJvb3RAbWFjaGluZTEwdjAQ
BgcqhkjOPQIBBgUrgQQAIgNiAATsEFeixQD1UvD54y2VEaf2ssxbcf07U07ptK3n
1064CoxMQn+mnynybkXbRShSRTihWuQGfTuDbsLlwZcb3YNmi+o8vIbOMmMGWewi
BNmu6P/YWvkyvZNciCGfrm4FY1ajYjBgMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUE
DDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCsGA1UdEQQkMCKCCG1hY2hpbmUx
hwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMAlF/tlz
taJCVmIUXk17wWR4s0aPSaTxM/sC+BLIHvX6tZ/4/ZM6clBEMQs9FiUFGQIxAJ3n
YefP+WzW62uVmYqHXxvIjrWDnAN+uMH+MHSABk9iqscO+sR8rMjNyF327Eg3hw==
-----END CERTIFICATE-----
certificate_fingerprint: 950458db1cfd70b1c1ebe827718d5a18cc1b65eb4d32d2d0f7f76c8e45cbdbeb
driver: lxc
driver_version: 4.0.2
firewall: xtables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
netnsid_getifaddrs: "false"
seccomp_listener: "false"
seccomp_listener_continue: "false"
shiftfs: "false"
uevent_injection: "false"
unpriv_fscaps: "true"
kernel_version: 4.15.0-88-generic
lxc_features:
cgroup2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
seccomp_notify: "true"
os_name: Ubuntu
os_version: "18.04"
project: default
server: lxd
server_clustered: true
server_name: machine4
server_pid: 14986
server_version: 4.0.0
storage: dir
storage_version: "1"
+------+-------+------+------+------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+------+-------+------+------+------+-----------+----------+
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ubuntu-18.04 | 2cfc5a5567b8 | no | ubuntu 18.04 LTS amd64 (release) (20200407) | x86_64 | CONTAINER | 179.02MB | Apr 18, 2020 at 1:21am (UTC) |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
+-------+-------------+--------+---------+---------+
| NAME | DESCRIPTION | DRIVER | STATE | USED BY |
+-------+-------------+--------+---------+---------+
| local | | dir | CREATED | 1 |
+-------+-------------+--------+---------+---------+
+---------+----------+---------+-------------+---------+---------+
| NAME | TYPE | MANAGED | DESCRIPTION | USED BY | STATE |
+---------+----------+---------+-------------+---------+---------+
| eth0 | physical | NO | | 0 | |
+---------+----------+---------+-------------+---------+---------+
| lxdfan0 | bridge | YES | | 0 | CREATED |
+---------+----------+---------+-------------+---------+---------+
+-------------------+--------+----------+-----------------+---------+
| NAME | IMAGES | PROFILES | STORAGE VOLUMES | USED BY |
+-------------------+--------+----------+-----------------+---------+
| default (current) | YES | YES | YES | 2 |
+-------------------+--------+----------+-----------------+---------+
+---------+---------+
| NAME | USED BY |
+---------+---------+
| default | 0 |
+---------+---------+
config: {}
description: Default LXD profile
devices:
eth0:
name: eth0
network: lxdfan0
type: nic
root:
path: /
pool: local
type: disk
name: default
used_by: []
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| NAME | URL | DATABASE | STATE | MESSAGE | ARCHITECTURE |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine1 | https://172.26.140.101:8443 | YES | OFFLINE | no heartbeat since 28.528447467s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine2 | https://172.26.140.102:8443 | YES | OFFLINE | no heartbeat since 28.528837434s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine3 | https://172.26.140.103:8443 | YES | OFFLINE | no heartbeat since 28.528712793s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine4 | https://172.26.140.104:8443 | NO | OFFLINE | no heartbeat since 28.528621334s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine5 | https://172.26.140.105:8443 | NO | OFFLINE | no heartbeat since 28.528534232s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
[1507592.156521] kauditd_printk_skb: 6 callbacks suppressed
[1507592.156522] audit: type=1400 audit(1587172676.108:236): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-test4_</var/snap/lxd/common/lxd>" pid=15473 comm="apparmor_parser"
[1508274.307933] lxdfan0: port 3(veth06c3f04f) entered blocking state
[1508274.307935] lxdfan0: port 3(veth06c3f04f) entered disabled state
[1508274.307997] device veth06c3f04f entered promiscuous mode
[1508274.585926] audit: type=1400 audit(1587173358.561:237): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-test4_</var/snap/lxd/common/lxd>" pid=15573 comm="apparmor_parser"
[1508274.650266] eth0: renamed from veth6a9f05c7
[1508274.665569] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[1508274.667393] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[1508274.667435] lxdfan0: port 3(veth06c3f04f) entered blocking state
[1508274.667437] lxdfan0: port 3(veth06c3f04f) entered forwarding state
[1508275.437370] audit: type=1400 audit(1587173359.413:238): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="/sbin/dhclient" pid=15862 comm="apparmor_parser"
[1508275.440018] audit: type=1400 audit(1587173359.413:239): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=15862 comm="apparmor_parser"
[1508275.442241] audit: type=1400 audit(1587173359.417:240): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=15862 comm="apparmor_parser"
[1508275.442908] audit: type=1400 audit(1587173359.417:241): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=15862 comm="apparmor_parser"
[1508275.461698] audit: type=1400 audit(1587173359.437:242): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="/usr/bin/lxc-start" pid=15870 comm="apparmor_parser"
[1508275.548967] audit: type=1400 audit(1587173359.525:243): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="lxc-container-default" pid=15861 comm="apparmor_parser"
[1508275.549796] audit: type=1400 audit(1587173359.525:244): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="lxc-container-default-cgns" pid=15861 comm="apparmor_parser"
[1508275.550576] audit: type=1400 audit(1587173359.525:245): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="lxc-container-default-with-mounting" pid=15861 comm="apparmor_parser"
[1508275.551377] audit: type=1400 audit(1587173359.525:246): apparmor="STATUS" operation="profile_load" label="lxd-test4_</var/snap/lxd/common/lxd>//&:lxd-test4_<var-snap-lxd-common-lxd>:unconfined" name="lxc-container-default-with-nesting" pid=15861 comm="apparmor_parser"
[1508531.729631] lxdfan0: port 4(veth29615663) entered blocking state
[1508531.729634] lxdfan0: port 4(veth29615663) entered disabled state
[1508531.729708] device veth29615663 entered promiscuous mode
[1508532.171578] kauditd_printk_skb: 6 callbacks suppressed
[1508532.171579] audit: type=1400 audit(1587173616.153:253): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-test8_</var/snap/lxd/common/lxd>" pid=16543 comm="apparmor_parser"
[1508532.272848] eth0: renamed from vethb4fe12f1
[1508532.297511] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[1508532.299553] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[1508532.299583] lxdfan0: port 4(veth29615663) entered blocking state
[1508532.299586] lxdfan0: port 4(veth29615663) entered forwarding state
[1508533.081033] audit: type=1400 audit(1587173617.066:254): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="/sbin/dhclient" pid=16839 comm="apparmor_parser"
[1508533.082234] audit: type=1400 audit(1587173617.066:255): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=16839 comm="apparmor_parser"
[1508533.082957] audit: type=1400 audit(1587173617.066:256): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=16839 comm="apparmor_parser"
[1508533.083619] audit: type=1400 audit(1587173617.066:257): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=16839 comm="apparmor_parser"
[1508533.109143] audit: type=1400 audit(1587173617.094:258): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="/usr/bin/lxc-start" pid=16846 comm="apparmor_parser"
[1508533.220254] audit: type=1400 audit(1587173617.202:259): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="/usr/bin/man" pid=16850 comm="apparmor_parser"
[1508533.221301] audit: type=1400 audit(1587173617.206:260): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="man_filter" pid=16850 comm="apparmor_parser"
[1508533.221938] audit: type=1400 audit(1587173617.206:261): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="man_groff" pid=16850 comm="apparmor_parser"
[1508533.229241] audit: type=1400 audit(1587173617.214:262): apparmor="STATUS" operation="profile_load" label="lxd-test8_</var/snap/lxd/common/lxd>//&:lxd-test8_<var-snap-lxd-common-lxd>:unconfined" name="lxc-container-default" pid=16825 comm="apparmor_parser"
[1536219.877006] lxdfan0: port 3(veth06c3f04f) entered disabled state
[1536219.961853] lxdfan0: port 3(veth06c3f04f) entered disabled state
[1536219.962542] device veth06c3f04f left promiscuous mode
[1536219.962545] lxdfan0: port 3(veth06c3f04f) entered disabled state
[1536220.773049] kauditd_printk_skb: 6 callbacks suppressed
[1536220.773050] audit: type=1400 audit(1587201305.627:269): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-test4_</var/snap/lxd/common/lxd>" pid=32280 comm="apparmor_parser"
[1536229.213920] lxdfan0: port 4(veth29615663) entered disabled state
[1536229.293751] lxdfan0: port 4(veth29615663) entered disabled state
[1536229.298518] device veth29615663 left promiscuous mode
[1536229.298521] lxdfan0: port 4(veth29615663) entered disabled state
[1536230.093713] audit: type=1400 audit(1587201314.947:270): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-test8_</var/snap/lxd/common/lxd>" pid=32449 comm="apparmor_parser"
t=2020-04-18T09:07:13+0800 lvl=info msg="Pruning expired images"
t=2020-04-18T09:07:13+0800 lvl=info msg="Done pruning expired images"
t=2020-04-18T09:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T09:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T09:07:13+0800 lvl=info msg="Expiring log files"
t=2020-04-18T09:07:13+0800 lvl=info msg="Done expiring log files"
t=2020-04-18T09:07:13+0800 lvl=info msg="Updating instance types"
t=2020-04-18T09:07:13+0800 lvl=info msg="Updating images"
t=2020-04-18T09:07:13+0800 lvl=info msg="Done updating instance types"
t=2020-04-18T09:07:13+0800 lvl=info msg="Done updating images"
t=2020-04-18T09:07:21+0800 lvl=info msg="Refreshing forkdns peers for lxdfan0"
t=2020-04-18T09:07:22+0800 lvl=info msg="Updated forkdns server list for 'lxdfan0': [240.105.0.1 240.101.0.1 240.102.0.1 240.103.0.1]"
t=2020-04-18T09:17:54+0800 lvl=info msg="Shutting down container" action=shutdown created=2020-04-17T22:41:03+0800 ephemeral=false name=test4 project=default timeout=-1s used=2020-04-17T23:57:36+0800
t=2020-04-18T09:17:56+0800 lvl=info msg="Shut down container" action=shutdown created=2020-04-17T22:41:03+0800 ephemeral=false name=test4 project=default timeout=-1s used=2020-04-17T23:57:36+0800
t=2020-04-18T09:17:56+0800 lvl=info msg="Deleting container" created=2020-04-17T22:41:03+0800 ephemeral=false name=test4 project=default used=2020-04-17T23:57:36+0800
t=2020-04-18T09:17:57+0800 lvl=info msg="Deleted container" created=2020-04-17T22:41:03+0800 ephemeral=false name=test4 project=default used=2020-04-17T23:57:36+0800
t=2020-04-18T09:29:04+0800 lvl=info msg="Creating container" ephemeral=false name=test4 project=default
t=2020-04-18T09:29:04+0800 lvl=info msg="Created container" ephemeral=false name=test4 project=default
t=2020-04-18T09:29:18+0800 lvl=info msg="Starting container" action=start created=2020-04-18T09:29:04+0800 ephemeral=false name=test4 project=default stateful=false used=1970-01-01T08:00:00+0800
t=2020-04-18T09:29:18+0800 lvl=info msg="Started container" action=start created=2020-04-18T09:29:04+0800 ephemeral=false name=test4 project=default stateful=false used=1970-01-01T08:00:00+0800
t=2020-04-18T09:33:21+0800 lvl=info msg="Creating container" ephemeral=false name=test8 project=default
t=2020-04-18T09:33:21+0800 lvl=info msg="Created container" ephemeral=false name=test8 project=default
t=2020-04-18T09:33:35+0800 lvl=info msg="Starting container" action=start created=2020-04-18T09:33:21+0800 ephemeral=false name=test8 project=default stateful=false used=1970-01-01T08:00:00+0800
t=2020-04-18T09:33:36+0800 lvl=info msg="Started container" action=start created=2020-04-18T09:33:21+0800 ephemeral=false name=test8 project=default stateful=false used=1970-01-01T08:00:00+0800
t=2020-04-18T10:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T10:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T11:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T11:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T12:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T12:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T13:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T13:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T14:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T14:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T15:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T15:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T15:07:13+0800 lvl=info msg="Updating images"
t=2020-04-18T15:07:13+0800 lvl=info msg="Done updating images"
t=2020-04-18T16:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T16:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T17:07:13+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T17:07:13+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T17:15:04+0800 lvl=info msg="Shutting down container" action=shutdown created=2020-04-18T09:29:04+0800 ephemeral=false name=test4 project=default timeout=-1s used=2020-04-18T09:29:18+0800
t=2020-04-18T17:15:05+0800 lvl=info msg="Shut down container" action=shutdown created=2020-04-18T09:29:04+0800 ephemeral=false name=test4 project=default timeout=-1s used=2020-04-18T09:29:18+0800
t=2020-04-18T17:15:05+0800 lvl=info msg="Deleting container" created=2020-04-18T09:29:04+0800 ephemeral=false name=test4 project=default used=2020-04-18T09:29:18+0800
t=2020-04-18T17:15:06+0800 lvl=info msg="Deleted container" created=2020-04-18T09:29:04+0800 ephemeral=false name=test4 project=default used=2020-04-18T09:29:18+0800
t=2020-04-18T17:15:13+0800 lvl=info msg="Shutting down container" action=shutdown created=2020-04-18T09:33:21+0800 ephemeral=false name=test8 project=default timeout=-1s used=2020-04-18T09:33:36+0800
t=2020-04-18T17:15:14+0800 lvl=info msg="Shut down container" action=shutdown created=2020-04-18T09:33:21+0800 ephemeral=false name=test8 project=default timeout=-1s used=2020-04-18T09:33:36+0800
t=2020-04-18T17:15:15+0800 lvl=info msg="Deleting container" created=2020-04-18T09:33:21+0800 ephemeral=false name=test8 project=default used=2020-04-18T09:33:36+0800
t=2020-04-18T17:15:15+0800 lvl=info msg="Deleted container" created=2020-04-18T09:33:21+0800 ephemeral=false name=test8 project=default used=2020-04-18T09:33:36+0800
-- Logs begin at Wed 2019-12-25 16:19:53 CST, end at Sat 2020-04-18 17:17:10 CST. --
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 1: fd: 6: name=systemd
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 2: fd: 7: net_cls,net_prio
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 3: fd: 8: blkio
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 4: fd: 9: cpuset
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 5: fd: 10: rdma
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 6: fd: 11: perf_event
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 7: fd: 12: memory
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 8: fd: 13: freezer
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 9: fd: 14: hugetlb
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 10: fd: 15: cpu,cpuacct
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 11: fd: 16: devices
Apr 17 23:57:33 machine4 lxd.daemon[12384]: 12: fd: 17: pids
Apr 17 23:57:33 machine4 lxd.daemon[12384]: api_extensions:
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - cgroups
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - sys_cpu_online
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_cpuinfo
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_diskstats
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_loadavg
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_meminfo
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_stat
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_swaps
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - proc_uptime
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - shared_pidns
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - cpuview_daemon
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - loadavg_daemon
Apr 17 23:57:33 machine4 lxd.daemon[12384]: - pidfds
Apr 18 09:07:06 machine4 systemd[1]: Stopping Service for snap application lxd.daemon...
Apr 18 09:07:06 machine4 lxd.daemon[14688]: => Stop reason is: snap refresh
Apr 18 09:07:06 machine4 lxd.daemon[14688]: => Stopping LXD
Apr 18 09:07:07 machine4 systemd[1]: Stopped Service for snap application lxd.daemon.
Apr 18 09:07:11 machine4 systemd[1]: Started Service for snap application lxd.daemon.
Apr 18 09:07:11 machine4 lxd.daemon[14883]: => Preparing the system (14709)
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Loading snap configuration
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Setting up mntns symlink (mnt:[4026532233])
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Setting up kmod wrapper
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Preparing /boot
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Preparing a clean copy of /run
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Preparing a clean copy of /etc
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Setting up ceph configuration
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Setting up LVM configuration
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Rotating logs
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Setting up ZFS (0.7)
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Escaping the systemd cgroups
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ====> Detected cgroup V1
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Escaping the systemd process resource limits
Apr 18 09:07:11 machine4 lxd.daemon[14883]: ==> Disabling shiftfs on this kernel (auto)
Apr 18 09:07:11 machine4 lxd.daemon[14883]: => Re-using existing LXCFS
Apr 18 09:07:11 machine4 lxd.daemon[14883]: => Starting LXD
Apr 18 09:07:11 machine4 lxd.daemon[14883]: t=2020-04-18T09:07:11+0800 lvl=warn msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
Apr 18 09:07:11 machine4 lxd.daemon[14883]: t=2020-04-18T09:07:11+0800 lvl=warn msg="Dqlite: server unavailable err=failed to establish network connection: 503 Service Unavailable address=172.26.140.101:8443 attempt=0"
The node5 lxd.buginfo output:
holytiny@machine5:~$ sudo lxd.buginfo
[sudo] password for holytiny:
name: lxd
summary: System container manager and API
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact: https://github.com/lxc/lxd/issues
license: unset
description: |
**LXD is a system container manager**
With LXD you can run hundreds of containers of a variety of Linux
distributions, apply resource limits, pass in directories, USB devices
or GPUs and setup any network and storage you want.
LXD containers are lightweight, secure by default and a great
alternative to running Linux virtual machines.
**Run any Linux distribution you want**
Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
A full list of available images can be found here: https://images.linuxcontainers.org
Can't find the distribution you want? It's easy to make your own images too, either using our
`distrobuilder` tool or by assembling your own image tarball by hand.
**Containers at scale**
LXD is network aware and all interactions go through a simple REST API,
making it possible to remotely interact with containers on remote
systems, copying and moving them as you wish.
Want to go big? LXD also has built-in clustering support,
letting you turn dozens of servers into one big LXD server.
**Configuration options**
Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increases logging to debug level [default=false]
- daemon.group: Group of users that can interact with LXD [default=lxd]
- ceph.builtin: Use snap-specific ceph configuration [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
Documentation: https://lxd.readthedocs.io
commands:
- lxd.benchmark
- lxd.buginfo
- lxd.check-kernel
- lxd.lxc
- lxd
- lxd.migrate
services:
lxd.activate: oneshot, enabled, inactive
lxd.daemon: simple, enabled, active
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking: latest/stable
refresh-date: today at 08:16 CST
channels:
latest/stable: 4.0.0 2020-04-17 (14709) 62MB -
latest/candidate: 4.0.0 2020-04-17 (14709) 62MB -
latest/beta: ↑
latest/edge: git-cc06a9a 2020-04-17 (14719) 62MB -
4.0/stable: 4.0.0 2020-04-17 (14709) 62MB -
4.0/candidate: 4.0.0 2020-04-17 (14709) 62MB -
4.0/beta: ↑
4.0/edge: ↑
3.23/stable: 3.23 2020-03-30 (14133) 70MB -
3.23/candidate: 3.23 2020-03-30 (14133) 70MB -
3.23/beta: ↑
3.23/edge: ↑
3.22/stable: 3.22 2020-03-18 (13901) 70MB -
3.22/candidate: 3.22 2020-03-19 (13911) 70MB -
3.22/beta: ↑
3.22/edge: ↑
3.21/stable: 3.21 2020-02-24 (13522) 69MB -
3.21/candidate: 3.21 2020-03-04 (13588) 69MB -
3.21/beta: ↑
3.21/edge: ↑
3.20/stable: 3.20 2020-02-06 (13300) 69MB -
3.20/candidate: 3.20 2020-02-06 (13300) 69MB -
3.20/beta: ↑
3.20/edge: ↑
3.19/stable: 3.19 2020-01-27 (13162) 67MB -
3.19/candidate: 3.19 2020-01-27 (13162) 67MB -
3.19/beta: ↑
3.19/edge: ↑
3.18/stable: 3.18 2019-12-02 (12631) 57MB -
3.18/candidate: 3.18 2019-12-02 (12631) 57MB -
3.18/beta: ↑
3.18/edge: ↑
3.0/stable: 3.0.4 2019-10-10 (11348) 55MB -
3.0/candidate: 3.0.4 2019-10-10 (11348) 55MB -
3.0/beta: ↑
3.0/edge: git-81b81b9 2019-10-10 (11362) 55MB -
2.0/stable: 2.0.11 2019-10-10 (8023) 28MB -
2.0/candidate: 2.0.11 2019-10-10 (8023) 28MB -
2.0/beta: ↑
2.0/edge: git-160221d 2020-01-13 (12854) 27MB -
installed: 4.0.0 (14709) 62MB -
config:
cluster.https_address: 172.26.140.105:8443
core.https_address: 172.26.140.105:8443
core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- resources_system
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
addresses:
- 172.26.140.105:8443
architectures:
- x86_64
- i686
certificate: |
-----BEGIN CERTIFICATE-----
MIICCjCCAZCgAwIBAgIRAOC8PbDgCl1kfkP5Xt8ImKgwCgYIKoZIzj0EAwMwNjEc
MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBtYWNo
aW5lMTAeFw0yMDAzMzExMzAyMDVaFw0zMDAzMjkxMzAyMDVaMDYxHDAaBgNVBAoT
E2xpbnV4Y29udGFpbmVycy5vcmcxFjAUBgNVBAMMDXJvb3RAbWFjaGluZTEwdjAQ
BgcqhkjOPQIBBgUrgQQAIgNiAATsEFeixQD1UvD54y2VEaf2ssxbcf07U07ptK3n
1064CoxMQn+mnynybkXbRShSRTihWuQGfTuDbsLlwZcb3YNmi+o8vIbOMmMGWewi
BNmu6P/YWvkyvZNciCGfrm4FY1ajYjBgMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUE
DDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCsGA1UdEQQkMCKCCG1hY2hpbmUx
hwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMAlF/tlz
taJCVmIUXk17wWR4s0aPSaTxM/sC+BLIHvX6tZ/4/ZM6clBEMQs9FiUFGQIxAJ3n
YefP+WzW62uVmYqHXxvIjrWDnAN+uMH+MHSABk9iqscO+sR8rMjNyF327Eg3hw==
-----END CERTIFICATE-----
certificate_fingerprint: 950458db1cfd70b1c1ebe827718d5a18cc1b65eb4d32d2d0f7f76c8e45cbdbeb
driver: lxc
driver_version: 4.0.2
firewall: xtables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
netnsid_getifaddrs: "false"
seccomp_listener: "false"
seccomp_listener_continue: "false"
shiftfs: "false"
uevent_injection: "false"
unpriv_fscaps: "true"
kernel_version: 4.15.0-88-generic
lxc_features:
cgroup2: "true"
mount_injection_file: "true"
network_gateway_device_route: "true"
network_ipvlan: "true"
network_l2proxy: "true"
network_phys_macvlan_mtu: "true"
network_veth_router: "true"
seccomp_notify: "true"
os_name: Ubuntu
os_version: "18.04"
project: default
server: lxd
server_clustered: true
server_name: machine5
server_pid: 10300
server_version: 4.0.0
storage: dir
storage_version: "1"
+------+-------+------+------+------+-----------+----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | LOCATION |
+------+-------+------+------+------+-----------+----------+
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ubuntu-18.04 | 2cfc5a5567b8 | no | ubuntu 18.04 LTS amd64 (release) (20200407) | x86_64 | CONTAINER | 179.02MB | Apr 18, 2020 at 1:21am (UTC) |
+--------------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
+-------+-------------+--------+---------+---------+
| NAME | DESCRIPTION | DRIVER | STATE | USED BY |
+-------+-------------+--------+---------+---------+
| local | | dir | CREATED | 1 |
+-------+-------------+--------+---------+---------+
+---------+----------+---------+-------------+---------+---------+
| NAME | TYPE | MANAGED | DESCRIPTION | USED BY | STATE |
+---------+----------+---------+-------------+---------+---------+
| eth0 | physical | NO | | 0 | |
+---------+----------+---------+-------------+---------+---------+
| lxdfan0 | bridge | YES | | 0 | CREATED |
+---------+----------+---------+-------------+---------+---------+
+-------------------+--------+----------+-----------------+---------+
| NAME | IMAGES | PROFILES | STORAGE VOLUMES | USED BY |
+-------------------+--------+----------+-----------------+---------+
| default (current) | YES | YES | YES | 2 |
+-------------------+--------+----------+-----------------+---------+
+---------+---------+
| NAME | USED BY |
+---------+---------+
| default | 0 |
+---------+---------+
config: {}
description: Default LXD profile
devices:
eth0:
name: eth0
network: lxdfan0
type: nic
root:
path: /
pool: local
type: disk
name: default
used_by: []
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| NAME | URL | DATABASE | STATE | MESSAGE | ARCHITECTURE |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine1 | https://172.26.140.101:8443 | YES | OFFLINE | no heartbeat since 36.929122614s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine2 | https://172.26.140.102:8443 | YES | OFFLINE | no heartbeat since 36.928999973s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine3 | https://172.26.140.103:8443 | YES | OFFLINE | no heartbeat since 36.928909953s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine4 | https://172.26.140.104:8443 | NO | OFFLINE | no heartbeat since 36.928825085s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
| machine5 | https://172.26.140.105:8443 | NO | OFFLINE | no heartbeat since 36.9287415s | x86_64 |
+----------+-----------------------------+----------+---------+----------------------------------+--------------+
[1503871.045718] audit: type=1400 audit(1587168983.880:178): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.install" pid=8645 comm="apparmor_parser"
[1503871.065062] audit: type=1400 audit(1587168983.900:179): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.hook.remove" pid=8646 comm="apparmor_parser"
[1503874.177808] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1503874.178870] device lxdfan0-mtu left promiscuous mode
[1503874.178874] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1503874.216501] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1503874.217083] device lxdfan0-fan left promiscuous mode
[1503874.217086] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1503874.250094] lxdfan0: port 1(lxdfan0-mtu) entered blocking state
[1503874.250095] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1503874.250153] device lxdfan0-mtu entered promiscuous mode
[1503874.250184] lxdfan0: port 1(lxdfan0-mtu) entered blocking state
[1503874.250185] lxdfan0: port 1(lxdfan0-mtu) entered forwarding state
[1503874.387937] lxdfan0: port 2(lxdfan0-fan) entered blocking state
[1503874.387939] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1503874.388011] device lxdfan0-fan entered promiscuous mode
[1503874.389762] lxdfan0: port 2(lxdfan0-fan) entered blocking state
[1503874.389764] lxdfan0: port 2(lxdfan0-fan) entered forwarding state
[1518115.721395] new mount options do not match the existing superblock, will be ignored
[1518116.714833] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1518116.715727] device lxdfan0-mtu left promiscuous mode
[1518116.715730] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1518116.745641] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1518116.750875] device lxdfan0-fan left promiscuous mode
[1518116.750879] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1518116.822833] lxdfan0: port 1(lxdfan0-mtu) entered blocking state
[1518116.822836] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1518116.823189] device lxdfan0-mtu entered promiscuous mode
[1518116.823222] lxdfan0: port 1(lxdfan0-mtu) entered blocking state
[1518116.823224] lxdfan0: port 1(lxdfan0-mtu) entered forwarding state
[1518116.968021] lxdfan0: port 2(lxdfan0-fan) entered blocking state
[1518116.968023] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1518116.968276] device lxdfan0-fan entered promiscuous mode
[1518116.971214] lxdfan0: port 2(lxdfan0-fan) entered blocking state
[1518116.971216] lxdfan0: port 2(lxdfan0-fan) entered forwarding state
[1518149.803748] device lxdfan0-fan left promiscuous mode
[1518149.803785] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1518149.814939] device lxdfan0-mtu left promiscuous mode
[1518149.814956] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1518151.998609] new mount options do not match the existing superblock, will be ignored
[1518153.951414] lxdfan0: port 1(lxdfan0-mtu) entered blocking state
[1518153.951417] lxdfan0: port 1(lxdfan0-mtu) entered disabled state
[1518153.951490] device lxdfan0-mtu entered promiscuous mode
[1518153.955170] lxdfan0: port 1(lxdfan0-mtu) entered blocking state
[1518153.955172] lxdfan0: port 1(lxdfan0-mtu) entered forwarding state
[1518154.020624] lxdfan0: port 2(lxdfan0-fan) entered blocking state
[1518154.020626] lxdfan0: port 2(lxdfan0-fan) entered disabled state
[1518154.020696] device lxdfan0-fan entered promiscuous mode
[1518154.031928] lxdfan0: port 2(lxdfan0-fan) entered blocking state
[1518154.031930] lxdfan0: port 2(lxdfan0-fan) entered forwarding state
t=2020-04-18T12:14:26+0800 lvl=info msg=" - g 0 0 4294967295"
t=2020-04-18T12:14:26+0800 lvl=info msg="Configured LXD uid/gid map:"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - u 0 1000000 1000000000"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - g 0 1000000 1000000000"
t=2020-04-18T12:14:26+0800 lvl=info msg="Kernel features:"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - netnsid-based network retrieval: no"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - uevent injection: no"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - seccomp listener: no"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - seccomp listener continue syscalls: no"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - unprivileged file capabilities: yes"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - cgroup layout: hybrid"
t=2020-04-18T12:14:26+0800 lvl=warn msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - shiftfs support: disabled"
t=2020-04-18T12:14:26+0800 lvl=info msg="Initializing local database"
t=2020-04-18T12:14:26+0800 lvl=info msg="Starting /dev/lxd handler:"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - binding devlxd socket" socket=/var/snap/lxd/common/lxd/devlxd/sock
t=2020-04-18T12:14:26+0800 lvl=info msg="REST API daemon:"
t=2020-04-18T12:14:26+0800 lvl=info msg=" - binding Unix socket" inherited=true socket=/var/snap/lxd/common/lxd/unix.socket
t=2020-04-18T12:14:26+0800 lvl=info msg=" - binding TCP socket" socket=172.26.140.105:8443
t=2020-04-18T12:14:26+0800 lvl=info msg="Initializing global database"
t=2020-04-18T12:14:26+0800 lvl=warn msg="Dqlite: server unavailable err=failed to establish network connection: 503 Service Unavailable address=172.26.140.101:8443 attempt=0"
t=2020-04-18T12:14:26+0800 lvl=info msg="Firewall loaded driver \"xtables\""
t=2020-04-18T12:14:27+0800 lvl=info msg="Initializing storage pools"
t=2020-04-18T12:14:27+0800 lvl=info msg="Initializing daemon storage mounts"
t=2020-04-18T12:14:27+0800 lvl=info msg="Initializing networks"
t=2020-04-18T12:14:27+0800 lvl=info msg="Pruning leftover image files"
t=2020-04-18T12:14:27+0800 lvl=info msg="Done pruning leftover image files"
t=2020-04-18T12:14:27+0800 lvl=info msg="Loading daemon configuration"
t=2020-04-18T12:14:27+0800 lvl=info msg="Pruning expired images"
t=2020-04-18T12:14:27+0800 lvl=info msg="Done pruning expired images"
t=2020-04-18T12:14:27+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T12:14:27+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T12:14:27+0800 lvl=info msg="Expiring log files"
t=2020-04-18T12:14:27+0800 lvl=info msg="Updating instance types"
t=2020-04-18T12:14:27+0800 lvl=info msg="Done updating instance types"
t=2020-04-18T12:14:27+0800 lvl=info msg="Done expiring log files"
t=2020-04-18T12:14:27+0800 lvl=info msg="Updating images"
t=2020-04-18T12:14:27+0800 lvl=info msg="Done updating images"
t=2020-04-18T12:14:34+0800 lvl=info msg="Refreshing forkdns peers for lxdfan0"
t=2020-04-18T12:14:34+0800 lvl=info msg="Updated forkdns server list for 'lxdfan0': [240.102.0.1 240.103.0.1 240.104.0.1 240.101.0.1]"
t=2020-04-18T13:14:27+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T13:14:27+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T14:14:27+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T14:14:27+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T15:14:27+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T15:14:27+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T16:14:27+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T16:14:27+0800 lvl=info msg="Done pruning expired instance backups"
t=2020-04-18T17:14:27+0800 lvl=info msg="Pruning expired instance backups"
t=2020-04-18T17:14:27+0800 lvl=info msg="Done pruning expired instance backups"
-- Logs begin at Wed 2019-12-25 16:19:53 CST, end at Sat 2020-04-18 17:23:30 CST. --
Apr 18 12:14:25 machine5 lxd.daemon[10026]: => Cleaning up namespaces
Apr 18 12:14:25 machine5 lxd.daemon[10026]: => All done
Apr 18 12:14:25 machine5 systemd[1]: Stopped Service for snap application lxd.daemon.
Apr 18 12:14:25 machine5 systemd[1]: Started Service for snap application lxd.daemon.
Apr 18 12:14:25 machine5 lxd.daemon[10189]: => Preparing the system (14709)
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Loading snap configuration
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Setting up mntns symlink (mnt:[4026532233])
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Setting up kmod wrapper
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Preparing /boot
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Preparing a clean copy of /run
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Preparing a clean copy of /etc
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Setting up ceph configuration
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Setting up LVM configuration
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Rotating logs
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Setting up ZFS (0.7)
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Escaping the systemd cgroups
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ====> Detected cgroup V1
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Escaping the systemd process resource limits
Apr 18 12:14:25 machine5 lxd.daemon[10189]: ==> Disabling shiftfs on this kernel (auto)
Apr 18 12:14:25 machine5 lxd.daemon[10189]: => Starting LXCFS
Apr 18 12:14:25 machine5 lxd.daemon[10189]: Running constructor lxcfs_init to reload liblxcfs
Apr 18 12:14:25 machine5 lxd.daemon[10189]: mount namespace: 4
Apr 18 12:14:25 machine5 lxd.daemon[10189]: hierarchies:
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 0: fd: 5:
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 1: fd: 6: name=systemd
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 2: fd: 7: hugetlb
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 3: fd: 8: freezer
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 4: fd: 9: memory
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 5: fd: 10: cpuset
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 6: fd: 11: net_cls,net_prio
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 7: fd: 12: rdma
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 8: fd: 13: blkio
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 9: fd: 14: cpu,cpuacct
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 10: fd: 15: devices
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 11: fd: 16: perf_event
Apr 18 12:14:25 machine5 lxd.daemon[10189]: 12: fd: 17: pids
Apr 18 12:14:25 machine5 lxd.daemon[10189]: api_extensions:
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - cgroups
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - sys_cpu_online
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_cpuinfo
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_diskstats
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_loadavg
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_meminfo
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_stat
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_swaps
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - proc_uptime
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - shared_pidns
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - cpuview_daemon
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - loadavg_daemon
Apr 18 12:14:25 machine5 lxd.daemon[10189]: - pidfds
I tried to add node4 and node5 to the cluster again, so I removed both of them from the cluster and then ran
sudo lxd init
on node4. However, it said the IP cannot be bound. I then executed:
sudo systemctl reload snap.lxd.daemon
sudo snap restart lxd
and ran
sudo lxd init
again. I got
Then I tried the same thing on node5; it also said the IP can't be bound, even though port 8443 was not being used.
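One way to double-check that nothing is still holding the address before re-running `lxd init` (a hedged suggestion, not something from the thread; 8443 is the port shown in the config above):

```sh
# Check whether anything is still listening on the LXD port.
sudo ss -tlnp | grep 8443 || echo "port 8443 is free"
```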
I removed the IP config settings from node5's LXD and added node5 again, but it still seemed that the cluster could not be reached from node5 :cry:
> I removed the IP config settings from node5's LXD and added node5 again.
Unsetting those configs will make the node unable to communicate, so it should never be done when using clustering.
We should probably throw an error in that case.
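For context, a rough sketch of how those keys look on a working cluster member and how they can be inspected (the addresses are taken from the node5 buginfo above):

```sh
# Inspect the address configuration on a cluster member.
lxc config get core.https_address       # e.g. 172.26.140.105:8443
lxc config get cluster.https_address    # e.g. 172.26.140.105:8443

# These must stay set while the node is part of a cluster; unsetting them
# (lxc config unset core.https_address) is what breaks cluster communication.
```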
It would also be worth checking that the clocks on all your nodes are properly synchronized: that might explain why you observed some nodes listed as ONLINE on one node and OFFLINE on another.
If the clocks are synchronized now, my best guess is that maybe they weren't at that time.
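A quick way to compare the clocks across nodes (a generic suggestion; hostnames are the ones from this thread):

```sh
# Check NTP synchronization status on a node (Ubuntu 18.04 uses systemd-timesyncd by default).
timedatectl status

# Or compare wall-clock time across all machines in one shot.
for host in machine1 machine2 machine3 machine4 machine5; do
    ssh "$host" date -u +%Y-%m-%dT%H:%M:%S
done
```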
Oh, errr... I removed the IP config from node5's LXD because I wanted to add it to the cluster again. If I didn't unset them, the `lxd init` command would complain that it can't bind this node's IP. So I unset them and added node5 to the cluster again.
> Oh, errr... I removed the IP config from node5's LXD because I wanted to add it to the cluster again. If I didn't unset them, the `lxd init` command would complain that it can't bind this node's IP. So I unset them and added node5 to the cluster again.
That's not going to work.
At this point run this (from another node, not node5):
lxc cluster remove --force node5
Then wipe node5 completely (e.g. `snap remove lxd`) and start from scratch.
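Put together, the suggested recovery might look roughly like this (a sketch only; on LXD 4.0 the rejoin is driven by the interactive `lxd init` dialog and the cluster trust password):

```sh
# On a healthy cluster member: force-remove the broken node.
lxc cluster remove --force node5

# On node5: wipe the LXD snap completely, reinstall, then rejoin the cluster.
sudo snap remove --purge lxd
sudo snap install lxd
sudo lxd init   # answer "yes" to joining an existing cluster
```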
Oh, I forgot to add this information: before I ran
lxd init
I had already removed the node from the cluster.
After removing the node, it still couldn't be added to the cluster because the IP couldn't be bound. Please look at the picture below: I had removed the node, so port 8443 was not being used. It's a little weird that a node can't be added back to the cluster after being removed from it. That's why I unset the IP config settings. I'll try wiping it completely.
Thanks a lot for helping.
@freeekanayaka what's the status on this one? Anything that needs fixing in LXD or dqlite?
@stgraber I looked again at the code producing the original error (`image not available on any online node`), and my reading is that it can only happen if all the nodes holding that image are considered offline. Given that the various `lxc cluster list` screenshots attached above indicate that all nodes are offline, the most likely cause is a clock mismatch between the nodes.
Without further details and a clearer reproducer, it's hard to tell whether a clock mismatch is indeed the cause or whether there's a different bug.
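One related setting worth checking is the cluster's offline threshold; the "no heartbeat since ~28s" messages above are just past the default 20-second window on LXD 4.0, so a hedged guess is that members were being marked offline marginally (an observation, not a confirmed diagnosis):

```sh
# Show the threshold (in seconds) after which a member is considered offline.
lxc config get cluster.offline_threshold

# It can be raised if heartbeats are occasionally delayed.
lxc config set cluster.offline_threshold 40
```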
@enderson-pan any update on this?
Base information
Detailed snap information
Detailed LXD information
Daemon configuration
Instances
Images
Storage pools
Networks
Projects
Profiles
Default profile
Cluster
Kernel log (last 50 lines)
Daemon log (last 50 lines)
Systemd log (last 50 lines)
Issue description
A brief description of the problem, including what you were attempting to do, what you did, what happened, and what you expected to happen.
I created a cluster of 5 machines, then launched instances with the command below, one after another, each starting immediately after the previous one finished.
After several launches, maybe 3-4, I get the error:
I've tried several times; the issue does not recover on its own. I then tried the command on another node, and the result is the same as on node1.
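The exact launch command is not shown in this extract; purely as an illustration, a repeated-launch loop of the kind described might look like this (container names and image alias assumed, the alias matching the one listed in the buginfo output):

```sh
# Hypothetical reproduction loop: launch containers one after another on the cluster.
for i in $(seq 1 10); do
    lxc launch ubuntu-18.04 "test$i"
done
```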
Steps to reproduce
Information to attach
- `dmesg`
- `lxc info NAME --show-log`
- `lxc config show NAME --expanded`
- `lxc monitor` (while reproducing the issue)
- `lxc image info ubuntu-18.04`