ceph / ceph-ansible

Ansible playbooks to deploy Ceph, the distributed filesystem.
Apache License 2.0

Trying a PoC with dd not working. stderr: blkid: error: /dev/ro1: Invalid argument #7549

Closed · vishwabhumi closed this issue 5 months ago

vishwabhumi commented 5 months ago

Bug Report

What happened: I am trying to do a simple PoC. I created 3 disk images with dd (3 GB each), made an ext4 filesystem on each with mkfs.ext4, and attached them with losetup (a sketch of that preparation follows the inventory below). I followed the ceph-ansible documentation; I am on stable-8.0. My inventory is basically:

```sh
[mons]
localhost ansible_connection=local

[monitoring]
localhost ansible_connection=local

[mgrs]
localhost ansible_connection=local

[osds]
localhost ansible_connection=local
```
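A minimal sketch of the disk preparation described above, for completeness. The image paths are illustrative, and since /dev/ro1..3 are not standard loop-device names, the exact losetup invocation is an assumption:

```sh
# Create three 3 GB images, format them ext4, and attach them as loop devices.
# Note: a stock `losetup -f` hands out /dev/loopN, not /dev/roN.
for i in 1 2 3; do
  dd if=/dev/zero of=/var/tmp/ro$i.img bs=1M count=3072
  mkfs.ext4 -F /var/tmp/ro$i.img   # -F: don't prompt about formatting a plain file
  losetup -f --show /var/tmp/ro$i.img
done
```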

But I keep running into this error:
```sh
TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ********
fatal: [localhost]: FAILED! => changed=true 
  cmd:
  - ceph-volume
  - --cluster
  - ceph
  - lvm
  - batch
  - --bluestore
  - /dev/ro3
  - /dev/ro1
  - /dev/ro2
  - --report
  - --format=json
  delta: '0:00:00.611818'
  end: '2024-04-18 10:48:31.052039'
  msg: non-zero return code
  rc: 1
  start: '2024-04-18 10:48:30.440221'
  stderr: |2-
     stderr: blkid: error: /dev/ro3: Invalid argument
     stderr: Unknown device "/dev/ro3": Inappropriate ioctl for device
     stderr: blkid: error: /dev/ro1: Invalid argument
     stderr: Unknown device "/dev/ro1": Inappropriate ioctl for device
     stderr: blkid: error: /dev/ro2: Invalid argument
     stderr: Unknown device "/dev/ro2": Inappropriate ioctl for device
    --> DEPRECATION NOTICE
    --> You are using the legacy automatic disk sorting behavior
    --> The Pacific release will change the default to --no-auto
    --> passed data devices: 0 physical, 3 LVM
    --> relative data size: 1.0
    Traceback (most recent call last):
      File "/usr/sbin/ceph-volume", line 33, in <module>
        sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
      File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 41, in __init__
        self.main(self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
        return f(*a, **kw)
      File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 153, in main
        terminal.dispatch(self.mapper, subcommand_args)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 46, in main
        terminal.dispatch(self.mapper, self.argv)
      File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
        instance.main()
      File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
        return func(*a, **kw)
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 401, in main
        plan = self.get_plan(self.args)
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 437, in get_plan      
        plan = self.get_deployment_layout(args, args.devices, args.db_devices,
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 456, in get_deployment_layout
        plan.extend(get_lvm_osds(lvm_devs, args))
      File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 93, in get_lvm_osds   
        disk.Size(b=int(lv.lvs[0].lv_size)),
    IndexError: list index out of range
  stderr_lines: <omitted>
  stdout: '{}'
  stdout_lines: <omitted>
```

Here /dev/ro1 (and likewise /dev/ro2 and /dev/ro3) is attached as a loop device using losetup.
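For reference, the blkid failure can be reproduced outside ceph-ansible. A quick diagnostic (not from the original report) to check whether the kernel actually treats these paths as block devices:

```sh
# Check whether the kernel sees these paths as real block devices
lsblk /dev/ro1 /dev/ro2 /dev/ro3   # should list them as block devices
stat -c '%F' /dev/ro1              # should print "block special file"
blkid /dev/ro1                     # the call that fails above with "Invalid argument"
```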

What you expected to happen:

I expect Ceph to be installed, along with Grafana and Prometheus.

How to reproduce it (minimal and precise):

Share your group_vars files, inventory, and **full** ceph-ansible log:
all.yml
```sh
ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: reef

monitor_interface: ens33
public_network: 192.168.91.0/24
```

osds.yml (I expect these to be used as the storage disks for Ceph):
```sh
devices:
  - /dev/ro1
  - /dev/ro2
  - /dev/ro3
```

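For reference, the failing report command from the task above can be rerun by hand on the OSD host to reproduce the traceback outside Ansible:

```sh
# The exact command the failed task runs (copied from the log above)
ceph-volume --cluster ceph lvm batch --bluestore \
    /dev/ro3 /dev/ro1 /dev/ro2 --report --format=json
```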
Environment:
- OS: Ubuntu 22.04.4 LTS (Jammy Jellyfish)
- ceph-ansible version: stable-8.0
- ceph_stable_release: reef

badfiles commented 5 months ago

Ceph won't work with loop devices; use targetcli instead.
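A rough sketch of what that could look like: export a file-backed backstore as a local iSCSI LUN with targetcli, then log in with the local initiator to get a real /dev/sdX block device. All names and paths below are illustrative assumptions, not from this thread:

```sh
# Create a file-backed LIO backstore and export it over iSCSI (illustrative names)
targetcli /backstores/fileio create name=osd1 file_or_dev=/var/tmp/osd1.img size=3G
targetcli /iscsi create iqn.2024-04.test:poc
targetcli /iscsi/iqn.2024-04.test:poc/tpg1/luns create /backstores/fileio/osd1
# Allow the local initiator (IQN must match /etc/iscsi/initiatorname.iscsi)
targetcli /iscsi/iqn.2024-04.test:poc/tpg1/acls create iqn.2024-04.test:client
# Log in locally; the LUN then appears as a real /dev/sdX block device
iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm -m node -T iqn.2024-04.test:poc -p 127.0.0.1 -l
```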

vishwabhumi commented 5 months ago

Umm, do we have an easy all-in-one PoC deployment guide?

github-actions[bot] commented 5 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

github-actions[bot] commented 5 months ago

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.