eazylaykzy opened this issue 1 year ago

I'm trying to bootstrap the Ceph Ansible role for a VM setup running Docker Swarm with 5 nodes (3 manager nodes) on Ubuntu, but I'm getting the error below. I've tried to look it up and found no useful solution, so I was thinking you might be able to help out.

Error log below:

Thanks.
I think Ubuntu uses apt for package management, so it will not have any yum-related files or folders. I tested this with Rocky Linux, which is what I generally run for servers. You could also try installing yum on Ubuntu, though I'm not sure how well that works; I have no experience with Ubuntu myself.
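Untested, but Ansible's generic `ansible.builtin.package` module dispatches to the native package manager (apt on Ubuntu, dnf/yum on Rocky), so a sketch like this might make the install task distro-agnostic, assuming the packages are available in each distro's repos:

```yaml
# Untested sketch: ansible.builtin.package resolves to the distro's
# native package manager (apt on Ubuntu, dnf/yum on Rocky Linux).
- name: Install Ceph packages
  ansible.builtin.package:
    name:
      - cephadm
      - ceph-common
    state: present
```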
Thank you for your prompt response; I sure will give that a try.
@tuupola Thank you for putting the Ceph role out here. I was able to get it to install on my Ubuntu VMs (5 Vagrant VMs), but I couldn't get it to work: it never got past the health check because, for some reason, Ceph couldn't start. I tested the configuration on DigitalOcean Droplets (5 VMs) as well and hit the same roadblock.

I'm putting the configuration out here for anyone who stumbles upon the repo and needs to run it on their Ubuntu machines; they can probably make it work for them.
```yaml
---
- hosts: all
  become: true
  name: Install CEPH binaries
  tasks:
    - name: Install CEPH and dependencies
      ansible.builtin.apt:
        name:
          - cephadm
          - ceph-common
        state: present

- hosts: all
  become: true
  name: Initialise CEPH cluster
  tasks:
    - name: Check for existing config file
      ansible.builtin.stat:
        path: /etc/ceph/ceph.conf
      register: ceph_conf

    - name: Bootstrap the cluster
      ansible.builtin.shell: cephadm bootstrap --mon-ip {{ ansible_host }} --ssh-user {{ docker_users[0] }} --skip-dashboard --skip-monitoring-stack --skip-firewalld
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      when: not ceph_conf.stat.exists and inventory_hostname == groups[swarm_managers_inventory_group_name][0] # only on the first manager

    - name: Get public key contents
      ansible.builtin.shell: cat /etc/ceph/ceph.pub
      register: ceph_pub
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      run_once: true

    - name: Distribute the public key to all hosts
      ansible.builtin.lineinfile:
        # path: /etc/ceph/ceph.pub
        # path: /home/{{ docker_users[0] }}/.ssh/authorized_keys
        path: /root/.ssh/authorized_keys # from https://geek-cookbook.funkypenguin.co.nz/docker-swarm/shared-storage-ceph/
        create: true
        state: present
        line: "{{ ceph_pub.stdout }}"

    - name: Get keyring contents
      ansible.builtin.shell: cat /etc/ceph/ceph.client.admin.keyring
      register: ceph_keyring
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      run_once: true

    - name: Distribute the keyring to all hosts
      ansible.builtin.shell: "echo '{{ ceph_keyring.stdout }}' > /etc/ceph/ceph.client.admin.keyring"

    - name: Get conf contents
      ansible.builtin.shell: cat /etc/ceph/ceph.conf
      register: ceph_conf
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      run_once: true

    - name: Distribute the 'conf' to all hosts
      ansible.builtin.shell: "echo '{{ ceph_conf.stdout }}' > /etc/ceph/ceph.conf"

    - name: Add other nodes to the cluster except the delegate
      ansible.builtin.shell: ceph orch host add {{ ansible_hostname }} {{ ansible_host }} --labels _admin
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      when: inventory_hostname != (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0]

    - name: Add all available disks to the cluster
      ansible.builtin.shell: ceph orch apply osd --all-available-devices
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      run_once: true

- hosts: all
  become: true
  name: Create and mount a CEPH volume named "shared"
  tasks:
    - name: Create the volume
      ansible.builtin.shell: ceph fs volume create shared
      delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
      run_once: true

    - name: Wait for the volume to become available
      ansible.builtin.shell: ceph health
      register: result
      until: result.stdout == "HEALTH_OK"
      retries: 30
      delay: 10

    - name: Mount the shared volume to /var/lib/docker/volumes
      ansible.posix.mount:
        src: admin@.shared=/
        path: /var/lib/docker/volumes
        opts: mon_addr={{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_host']))[0] }}:6789,noatime,_netdev
        state: mounted
        fstype: ceph
```
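Note that the playbook leans on two variables from the Swarm role, `swarm_managers_inventory_group_name` and `docker_users`. For anyone adapting this, here is a rough, hypothetical sketch of the group_vars shape it expects (the group and user names below are placeholders, not values from the role):

```yaml
# Hypothetical group_vars sketch; the values are placeholders.
swarm_managers_inventory_group_name: managers  # inventory group holding the 3 manager nodes
docker_users:                                  # first entry becomes cephadm's --ssh-user
  - deploy
```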
This actually works; I just didn't understand the internals of Ceph at first. Locally I was using Vagrant, and I noticed that `ceph orch apply osd --all-available-devices` needs free disk space, which I didn't have with Vagrant. So I spun up 3 DigitalOcean Droplets, each with a separate Volume attached (unformatted and unpartitioned); this way Ceph was able to detect all three and add them, I got a healthy cluster, and I was able to mount the shared volume on all VMs.
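For anyone debugging the same thing: before the OSD apply, `ceph orch device ls` shows which disks cephadm considers usable (a device only counts as available if it has no partitions or filesystem, which is why the raw DigitalOcean Volumes worked). A task pair like this (my addition, not part of the playbook above) would surface that:

```yaml
# Optional sanity check; my addition, not part of the playbook above.
- name: List devices cephadm considers available
  ansible.builtin.shell: ceph orch device ls
  register: ceph_devices
  delegate_to: "{{ (groups[swarm_managers_inventory_group_name] | map('extract', hostvars, ['ansible_hostname']))[0] }}"
  run_once: true

- name: Show the device list
  ansible.builtin.debug:
    var: ceph_devices.stdout_lines
```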
It's a pretty nice setup you have; thanks for putting it out.