canonical / lxd

Powerful system container and virtual machine manager
https://canonical.com/lxd
GNU Affero General Public License v3.0

Can't select ceph driver when opting for clustered lxd setup. #4750

Closed Spunge closed 6 years ago

Spunge commented 6 years ago

Required information

Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:        18.04
Codename:       bionic

driver: lxc
driver_version: 3.0.1
kernel: Linux
kernel_architecture: x86_64
kernel_version: 4.15.0-23-generic
server: lxd
server_pid: 22361
server_version: "3.2"
storage: ""
storage_version: ""
server_clustered: false

Issue description

I've been testing a ceph / LXD setup for the last 2 days. But now, after rolling back to a snapshot and trying to snap install & init LXD for the [x]th time, the ceph driver is gone?

Do you want to configure a new local storage pool? (yes/no) [default=yes]: yes
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: ceph
Invalid input, try again.

Less than 2 hours ago I went through the exact same process and could select the ceph driver without any problems.

Steps to reproduce

  1. Install the lxd snap
  2. Init lxd
  3. Try to select ceph storage

I can't seem to find any info on https://snapcraft.io/lxd about when the package last changed.

stgraber commented 6 years ago

Sounds like you may have a mix of snap and deb. Try running:

apt remove --purge lxd lxd-client
stgraber commented 6 years ago

I just tried it here using the stable channel for LXD and it offers me CEPH just fine.

Spunge commented 6 years ago

I purged the lxd, lxd-client, lxcfs & liblxc1 packages from the system before installing the snap and encountering the issue.

I will retry now, sec

Spunge commented 6 years ago
root@host-0001:~# apt-get purge lxd lxd-client lxcfs liblxc1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  liblxc-common* liblxc1* lxcfs* lxd* lxd-client*
0 upgraded, 0 newly installed, 5 to remove and 0 not upgraded.
After this operation, 33.2 MB disk space will be freed.
Do you want to continue? [Y/n] 
(Reading database ... 112325 files and directories currently installed.)
Removing lxd (3.0.1-0ubuntu1~18.04.1) ...
Removing lxd dnsmasq configuration
Removing lxcfs (3.0.1-0ubuntu2~18.04.1) ...
Removing lxd-client (3.0.1-0ubuntu1~18.04.1) ...
Removing liblxc-common (3.0.1-0ubuntu1~18.04.1) ...
Removing liblxc1 (3.0.1-0ubuntu1~18.04.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for man-db (2.8.3-2) ...
(Reading database ... 112081 files and directories currently installed.)
Purging configuration files for liblxc-common (3.0.1-0ubuntu1~18.04.1) ...
Purging configuration files for lxd (3.0.1-0ubuntu1~18.04.1) ...
Purging configuration files for lxcfs (3.0.1-0ubuntu2~18.04.1) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for systemd (237-3ubuntu10) ...
root@host-0001:~# snap install lxd
lxd 3.2 from 'canonical' installed
root@host-0001:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=host-0001]: 
What IP address or DNS name should be used to reach this node? [default=192.168.0.8]: 10.0.0.8
Are you joining an existing cluster? (yes/no) [default=no]: no
Setup password authentication on the cluster? (yes/no) [default=yes]: no
Do you want to configure a new local storage pool? (yes/no) [default=yes]: yes
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: ceph
Invalid input, try again.

root@host-0001:~# ps aux | grep lxd
root       42996  0.0  0.0   4504  1788 ?        Ss   15:38   0:00 /bin/sh /snap/lxd/7651/commands/daemon.start
root       43217  0.0  0.0  95384  1892 ?        Sl   15:38   0:00 lxcfs /var/snap/lxd/common/var/lib/lxcfs -p /var/snap/lxd/common/lxcfs.pid
root       43230  3.5  0.7 1396340 29844 ?       Sl   15:38   0:08 lxd --logfile /var/snap/lxd/common/lxd/logs/lxd.log --group lxd
root       45197  0.0  0.0  13136  1056 pts/1    S+   15:42   0:00 grep --color=auto lxd

root@host-0001:~# dpkg --list | grep lxd
root@host-0001:~# 

config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    removed for brevity
  certificate_fingerprint: 05e45fceceb581d54affe453a87c1db1b0f2c652c8f71f4c8d5cac619a823a94
  driver: lxc
  driver_version: 3.0.1
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.15.0-23-generic
  server: lxd
  server_pid: 43230
  server_version: "3.2"
  storage: ""
  storage_version: ""
  server_clustered: false
  server_name: host-0001

Same result. This started around 6 hours ago; before that, running the exact same commands on the exact same snapshot worked.

stgraber commented 6 years ago

Deployed a clean Ubuntu 18.04 system here, then:

root@djanet:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: 

Can you post the output of lxd.buginfo and snap get lxd?

Spunge commented 6 years ago

Before reading this, please skip to the end.

Snap get lxd

root@host-0001:~# snap get lxd
error: snap "lxd" has no configuration

Base information

Detailed snap information

name:      lxd
summary:   System container manager and API
publisher: canonical
contact:   https://github.com/lxc/lxd/issues
license:   unknown
description: |
  LXD is a container manager for system containers.

  It offers a REST API to remotely manage containers over the network, using
  an image based workflow and with support for live migration.

  Images are available for all Ubuntu releases and architectures as well as
  for a wide number of other Linux distributions.

  LXD containers are lightweight, secure by default and a great alternative
  to virtual machines.
commands:
  - lxd.benchmark
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd
  - lxd.migrate
services:
  lxd.daemon: simple, enabled, active
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     stable
refresh-date: today at 07:50 UTC
channels:                                
  stable:        3.2         (7651) 57MB -
  candidate:     3.2         (7651) 57MB -
  beta:          ↑                       
  edge:          git-47bb242 (7683) 57MB -
  2.0/stable:    2.0.11      (7503) 28MB -
  2.0/candidate: 2.0.11      (7503) 28MB -
  2.0/beta:      ↑                       
  2.0/edge:      git-34271f2 (7673) 26MB -
  3.0/stable:    3.0.1       (7650) 56MB -
  3.0/candidate: 3.0.1       (7650) 56MB -
  3.0/beta:      ↑                       
  3.0/edge:      git-093125c (7621) 57MB -
installed:       3.2         (7651) 57MB -

Detailed LXD information

Daemon configuration

config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
     removed for brevity
  certificate_fingerprint: 15d931d633c450e4ef6fbf7e33f09e73c24e3a7471378aba24d023638f793c4c
  driver: lxc
  driver_version: 3.0.1
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.15.0-23-generic
  server: lxd
  server_pid: 7077
  server_version: "3.2"
  storage: ""
  storage_version: ""
  server_clustered: false
  server_name: host-0001

Containers

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Images

+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+

Storage pools

+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+

Networks

+------+----------+---------+-------------+---------+
| NAME |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+------+----------+---------+-------------+---------+
| ens3 | physical | NO      |             | 0       |
+------+----------+---------+-------------+---------+
| ens8 | physical | NO      |             | 0       |
+------+----------+---------+-------------+---------+

Default profile

config: {}
description: Default LXD profile
devices: {}
name: default
used_by: []

Kernel log (last 50 lines)

[    7.419467] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[    7.433338] systemd[1]: Created slice System Slice.
[    7.435282] systemd[1]: Listening on udev Control Socket.
[    7.437822] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[    7.440634] systemd[1]: Listening on Journal Audit Socket.
[    7.500847] Loading iSCSI transport class v2.0-870.
[    7.625150] iscsi: registered transport (tcp)
[    7.715478] systemd-journald[923]: Received request to flush runtime journal from PID 1
[    7.932020] iscsi: registered transport (iser)
[    9.099858] MCE: In-kernel MCE decoding enabled.
[   10.017354] snd_hda_codec_generic hdaudioC0D0: autoconfig for Generic: line_outs=1 (0x3/0x0/0x0/0x0/0x0) type:line
[   10.017358] snd_hda_codec_generic hdaudioC0D0:    speaker_outs=0 (0x0/0x0/0x0/0x0/0x0)
[   10.017361] snd_hda_codec_generic hdaudioC0D0:    hp_outs=0 (0x0/0x0/0x0/0x0/0x0)
[   10.017363] snd_hda_codec_generic hdaudioC0D0:    mono: mono_out=0x0
[   10.017365] snd_hda_codec_generic hdaudioC0D0:    inputs:
[   10.017367] snd_hda_codec_generic hdaudioC0D0:      Line=0x5
[   12.278105] audit: type=1400 audit(1531122583.944:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/snap/core/4917/usr/lib/snapd/snap-confine" pid=1766 comm="apparmor_parser"
[   12.278110] audit: type=1400 audit(1531122583.944:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/snap/core/4917/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1766 comm="apparmor_parser"
[   12.281233] audit: type=1400 audit(1531122583.944:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine" pid=1769 comm="apparmor_parser"
[   12.281238] audit: type=1400 audit(1531122583.944:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=1769 comm="apparmor_parser"
[   12.283190] audit: type=1400 audit(1531122583.944:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1767 comm="apparmor_parser"
[   12.289141] audit: type=1400 audit(1531122583.952:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/man" pid=1768 comm="apparmor_parser"
[   12.289146] audit: type=1400 audit(1531122583.952:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_filter" pid=1768 comm="apparmor_parser"
[   12.289149] audit: type=1400 audit(1531122583.952:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="man_groff" pid=1768 comm="apparmor_parser"
[   12.310061] audit: type=1400 audit(1531122583.972:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1764 comm="apparmor_parser"
[   12.310066] audit: type=1400 audit(1531122583.972:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=1764 comm="apparmor_parser"
[   13.492095] 8139cp 0000:00:03.0 ens3: link up, 100Mbps, full-duplex, lpa 0x05E1
[   15.394660] new mount options do not match the existing superblock, will be ignored
[   17.661041] systemd-journald[923]: Failed to set ACL on /var/log/journal/d2bf7c5f53774bedb39fc5850867c656/user-1000.journal, ignoring: Operation not supported
[   43.880414] systemd-journald[923]: Failed to set ACL on /var/log/journal/d2bf7c5f53774bedb39fc5850867c656/user-1000.journal, ignoring: Operation not supported
[   64.823204] kauditd_printk_skb: 9 callbacks suppressed
[   64.823205] audit: type=1400 audit(1531122635.250:21): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap-update-ns.lxd" pid=6294 comm="apparmor_parser"
[   64.928876] audit: type=1400 audit(1531122635.358:22): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.benchmark" pid=6296 comm="apparmor_parser"
[   65.031749] audit: type=1400 audit(1531122635.458:23): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.buginfo" pid=6298 comm="apparmor_parser"
[   65.135369] audit: type=1400 audit(1531122635.562:24): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.check-kernel" pid=6300 comm="apparmor_parser"
[   65.236617] audit: type=1400 audit(1531122635.666:25): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.daemon" pid=6302 comm="apparmor_parser"
[   65.316983] audit: type=1400 audit(1531122635.746:26): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.hook.configure" pid=6304 comm="apparmor_parser"
[   65.402353] audit: type=1400 audit(1531122635.830:27): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.lxc" pid=6306 comm="apparmor_parser"
[   65.494000] audit: type=1400 audit(1531122635.922:28): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.lxd" pid=6308 comm="apparmor_parser"
[   65.583063] audit: type=1400 audit(1531122636.010:29): apparmor="STATUS" operation="profile_load" profile="unconfined" name="snap.lxd.migrate" pid=6310 comm="apparmor_parser"
[   66.663339] audit: type=1400 audit(1531122637.090:30): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/snap/core/4917/usr/lib/snapd/snap-confine" pid=6392 comm="apparmor_parser"
[   69.826040] kauditd_printk_skb: 31 callbacks suppressed
[   69.826042] audit: type=1400 audit(1531122640.254:62): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.buginfo" pid=6692 comm="apparmor_parser"
[   69.929267] audit: type=1400 audit(1531122640.358:63): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.check-kernel" pid=6694 comm="apparmor_parser"
[   70.029764] audit: type=1400 audit(1531122640.458:64): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.daemon" pid=6696 comm="apparmor_parser"
[   70.040667] audit: type=1400 audit(1531122640.470:65): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="snap.lxd.hook.configure" pid=6698 comm="apparmor_parser"
[   70.144718] audit: type=1400 audit(1531122640.574:66): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxc" pid=6700 comm="apparmor_parser"
[   70.242900] audit: type=1400 audit(1531122640.670:67): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.lxd" pid=6704 comm="apparmor_parser"
[   70.334004] audit: type=1400 audit(1531122640.762:68): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="snap.lxd.migrate" pid=6706 comm="apparmor_parser"
[   73.102051] new mount options do not match the existing superblock, will be ignored

Daemon log (last 50 lines)

lvl=info msg="LXD 3.2 is starting in normal mode" path=/var/snap/lxd/common/lxd t=2018-07-09T07:50:44+0000
lvl=info msg="Kernel uid/gid map:" t=2018-07-09T07:50:44+0000
lvl=info msg=" - u 0 0 4294967295" t=2018-07-09T07:50:44+0000
lvl=info msg=" - g 0 0 4294967295" t=2018-07-09T07:50:44+0000
lvl=info msg="Configured LXD uid/gid map:" t=2018-07-09T07:50:44+0000
lvl=info msg=" - u 0 1000000 1000000000" t=2018-07-09T07:50:44+0000
lvl=info msg=" - g 0 1000000 1000000000" t=2018-07-09T07:50:44+0000
lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-07-09T07:50:44+0000
lvl=info msg="Initializing local database" t=2018-07-09T07:50:44+0000
lvl=info msg="Initializing database gateway" t=2018-07-09T07:50:47+0000
address= id=1 lvl=info msg="Start database node" t=2018-07-09T07:50:47+0000
lvl=info msg="Raft: Initial configuration (index=1): [{Suffrage:Voter ID:1 Address:0}]" t=2018-07-09T07:50:47+0000
lvl=info msg="Raft: Node at 0 [Leader] entering Leader state" t=2018-07-09T07:50:47+0000
lvl=info msg="LXD isn't socket activated" t=2018-07-09T07:50:47+0000
lvl=info msg="Starting /dev/lxd handler:" t=2018-07-09T07:50:47+0000
lvl=info msg=" - binding devlxd socket" socket=/var/snap/lxd/common/lxd/devlxd/sock t=2018-07-09T07:50:47+0000
lvl=info msg="REST API daemon:" t=2018-07-09T07:50:47+0000
lvl=info msg=" - binding Unix socket" socket=/var/snap/lxd/common/lxd/unix.socket t=2018-07-09T07:50:47+0000
lvl=info msg="Initializing global database" t=2018-07-09T07:50:47+0000
lvl=info msg="Initializing storage pools" t=2018-07-09T07:50:47+0000
lvl=info msg="Initializing networks" t=2018-07-09T07:50:47+0000
lvl=info msg="Loading configuration" t=2018-07-09T07:50:47+0000
lvl=info msg="Connected to MAAS controller" t=2018-07-09T07:50:47+0000
lvl=info msg="Pruning expired images" t=2018-07-09T07:50:47+0000
lvl=info msg="Done pruning expired images" t=2018-07-09T07:50:47+0000
lvl=info msg="Updating instance types" t=2018-07-09T07:50:47+0000
lvl=info msg="Expiring log files" t=2018-07-09T07:50:47+0000
lvl=info msg="Updating images" t=2018-07-09T07:50:47+0000
lvl=info msg="Done expiring log files" t=2018-07-09T07:50:47+0000
lvl=info msg="Done updating images" t=2018-07-09T07:50:47+0000
lvl=info msg="Done updating instance types" t=2018-07-09T07:50:50+0000

Systemd log (last 50 lines)

-- Logs begin at Thu 2018-07-05 07:48:03 UTC, end at Mon 2018-07-09 08:04:59 UTC. --
Jul 09 07:50:41 host-0001 systemd[1]: Started Service for snap application lxd.daemon.
Jul 09 07:50:41 host-0001 lxd.daemon[6757]: => Preparing the system
Jul 09 07:50:41 host-0001 lxd.daemon[6757]: ==> Creating missing snap configuration
Jul 09 07:50:42 host-0001 lxd.daemon[6757]: ==> Loading snap configuration
Jul 09 07:50:42 host-0001 lxd.daemon[6757]: ==> Setting up mntns symlink
Jul 09 07:50:42 host-0001 lxd.daemon[6757]: ==> Setting up kmod wrapper
Jul 09 07:50:42 host-0001 lxd.daemon[6757]: ==> Preparing /boot
Jul 09 07:50:42 host-0001 lxd.daemon[6757]: ==> Preparing a clean copy of /run
Jul 09 07:50:42 host-0001 lxd.daemon[6757]: ==> Preparing a clean copy of /etc
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Setting up bash completion
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Setting up ceph configuration
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Setting up LVM configuration
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Rotating logs
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Setting up ZFS (0.7)
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Escaping the systemd cgroups
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: ==> Escaping the systemd process resource limits
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: => Starting LXCFS
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: => Starting LXD
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: mount namespace: 5
Jul 09 07:50:43 host-0001 lxd.daemon[6757]: hierarchies:
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   0: fd:   6: cpuset
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   1: fd:   7: perf_event
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   2: fd:   8: hugetlb
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   3: fd:   9: pids
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   4: fd:  10: devices
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   5: fd:  11: rdma
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   6: fd:  12: cpu,cpuacct
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   7: fd:  13: net_cls,net_prio
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   8: fd:  14: blkio
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:   9: fd:  15: memory
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:  10: fd:  16: freezer
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:  11: fd:  17: name=systemd
Jul 09 07:50:43 host-0001 lxd.daemon[6757]:  12: fd:  18: unified
Jul 09 07:50:44 host-0001 lxd.daemon[6757]: lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." t=2018-07-09T07:50:44+0000

Look at this though

root@alpha-0001:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: ^C
root@host-0001:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=host-0001]: 
What IP address or DNS name should be used to reach this node? [default=192.168.0.8]: 10.0.0.8
Are you joining an existing cluster? (yes/no) [default=no]: no
Setup password authentication on the cluster? (yes/no) [default=yes]: no 
Do you want to configure a new local storage pool? (yes/no) [default=yes]: yes
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]:     

The problem seems to only occur when opting for clustering.

Spunge commented 6 years ago

I just set up an Ubuntu 18.04 VM to make sure this was a problem with LXD, and got the same result on that box:

root@testbox:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]: ^C
root@testbox:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: yes
What name should be used to identify this node in the cluster? [default=testbox]: 
What IP address or DNS name should be used to reach this node? [default=192.168.178.115]: 
Are you joining an existing cluster? (yes/no) [default=no]: no
Setup password authentication on the cluster? (yes/no) [default=yes]: no
Do you want to configure a new local storage pool? (yes/no) [default=yes]: yes
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: ^C
root@testbox:~# 
stgraber commented 6 years ago

Ah, then it's perfectly normal.

Look at the question you're being asked: it says "local storage pool". CEPH isn't a local storage backend. The question after this one will be about a "remote storage pool"; if you answer yes to that one, CEPH will be offered as the only storage backend.
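For anyone else who hits this: the remote pool can also be set up non-interactively with `lxd init --preseed`. A minimal sketch is below; the pool name and config keys (`ceph.cluster_name`, `ceph.osd.pool_name`) are illustrative values based on the LXD storage docs, so adjust them to your Ceph cluster.

```yaml
# Hypothetical preseed fragment for lxd init --preseed; names are illustrative.
config: {}
storage_pools:
- name: remote          # example pool name
  driver: ceph
  config:
    ceph.cluster_name: ceph   # assumed Ceph cluster name (default is "ceph")
    ceph.osd.pool_name: lxd   # assumed OSD pool backing this LXD pool
```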

Spunge commented 6 years ago

Hahaha, wow, I feel so stupid now.