Bug Report

What happened:
This appears very similar to https://github.com/ceph/ceph-ansible/issues/5362, however I have confirmed the --force option is included.

TASK [scan ceph-disk osds with ceph-volume if deploying nautilus] **************
Tuesday 06 October 2020 19:56:42 +0200 (0:00:00.098) 0:10:36.135 *******
fatal: [storage001-stg]: FAILED! => changed=true
cmd:
- ceph-volume
- --cluster=ceph
- simple
- scan
- --force
delta: '0:00:00.892080'
end: '2020-10-06 19:56:43.691758'
msg: non-zero return code
rc: 1
start: '2020-10-06 19:56:42.799678'
stderr: |2-
stderr: lsblk: /var/lib/ceph/osd/ceph-12: not a block device
stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
Running command: /sbin/cryptsetup status tmpfs
stderr: blkid: error: tmpfs: No such file or directory
stderr: lsblk: tmpfs: not a block device
Traceback (most recent call last):
File "/usr/sbin/ceph-volume", line 11, in <module>
load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 40, in __init__
self.main(self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
return f(*a, **kw)
File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 151, in main
terminal.dispatch(self.mapper, subcommand_args)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/simple/main.py", line 33, in main
terminal.dispatch(self.mapper, self.argv)
File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
instance.main()
File "/usr/lib/python3/dist-packages/ceph_volume/devices/simple/scan.py", line 378, in main
device = Device(self.encryption_metadata['device'])
File "/usr/lib/python3/dist-packages/ceph_volume/util/device.py", line 92, in __init__
self._parse()
File "/usr/lib/python3/dist-packages/ceph_volume/util/device.py", line 138, in _parse
vgname, lvname = self.path.split('/')
ValueError: not enough values to unpack (expected 2, got 1)
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
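
For context on the traceback: scan.py passes the encryption metadata device into Device(), and the cryptsetup/lsblk lines above show that device is the literal string tmpfs rather than a block device or a vg/lv path. Device._parse() then runs vgname, lvname = self.path.split('/'), and because tmpfs contains no '/', the unpack raises the ValueError. Below is a minimal standalone sketch of that failure mode (illustration only; parse_vg_lv is a made-up name, not ceph-volume code):

    # Reproduces the unpack error from the traceback: splitting a value that is
    # expected to look like "<vg>/<lv>" but is actually just "tmpfs".
    def parse_vg_lv(path):
        vgname, lvname = path.split('/')  # ValueError when there is no '/'
        return vgname, lvname

    print(parse_vg_lv('vg0/osd-block-0'))  # OK: ('vg0', 'osd-block-0')
    print(parse_vg_lv('tmpfs'))            # ValueError: not enough values to unpack (expected 2, got 1)
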
What you expected to happen:
I expected all OSDs to be updated to the latest Ceph Octopus release. Right now we have a mixed bag:
How to reproduce it (minimal and precise):
Share your group_vars files, inventory and full ceph-ansible log
- Variables (spread across multiple files, some environment specific)
- Inventory
- Log content
Environment: