Closed by openvstorage-ci 7 years ago
From @khenderick on July 1, 2016 12:32
Extra information: The requirement for a disk as being usable as a plain disk (to be used as a cache disk, db disk, ...) is completely different from the requirements for it to be used as a backend disk.
From @khenderick on July 1, 2016 12:32
Current requirements for a disk to be available as a backend disk:

- It should be listed in /dev/disk/by-id/ with a name starting with scsi-, ata- or virtio-.
- That entry should be a symlink pointing to /dev/sd* or /dev/vd*.
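As an illustration of that rule (a minimal sketch, not the actual framework code; it assumes GNU readlink and takes the by-id directory as an optional argument so it can be tried outside /dev):

```shell
# Sketch: a device qualifies as a backend disk only when a symlink in
# /dev/disk/by-id named scsi-*, ata-* or virtio-* resolves to /dev/sd*
# or /dev/vd*. NVMe devices fail both checks, which is the bug here.
qualifies_as_backend_disk() {
    dev="$1"                          # e.g. /dev/sdb
    by_id="${2:-/dev/disk/by-id}"     # overridable for testing
    for link in "$by_id"/scsi-* "$by_id"/ata-* "$by_id"/virtio-*; do
        [ -L "$link" ] || continue            # skip non-matching globs
        target=$(readlink -f "$link")         # resolve the symlink
        case "$target" in
            /dev/sd*|/dev/vd*) [ "$target" = "$dev" ] && return 0 ;;
        esac
    done
    return 1
}
```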
Can you please add some information about the disks to this ticket (e.g. how they are listed under /dev/disk/by-id/), so we can add support for these NVMe drives?
From @khenderick on July 1, 2016 12:32
From @pploegaert on June 27, 2016 11:38
Not present in /dev/disk/by-id/. Present in /dev/ as nvme*:
```
root@allflash184:/dev# ls -la nvm*
crw------- 1 root root 247, 0 Jun  6 12:48 nvme0
brw-rw---- 1 root disk 259, 0 Jun 24 16:53 nvme0n1
brw-rw---- 1 root disk 259, 1 Jun 24 17:15 nvme0n1p1
```
Partitioned but no filesystem present yet:
```
root@allflash184:/dev# parted /dev/nvme0n1
GNU Parted 2.3
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: Unknown (unknown)
Disk /dev/nvme0n1: 1601GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1601GB  1601GB  primary
```
From @khenderick on July 1, 2016 12:32
A quick fix that might be possible (instead of rewriting how disks are identified) is adding symlinks, e.g. /dev/disk/by-id/nvme-something -> ../../nvme0n1 and /dev/disk/by-id/nvme-something-part1 -> ../../nvme0n1p1, and then making a small patch in the code to accept these (that should be quite easy).
The only downside of this temporary fix is that the symlinks have to be added beforehand, but for the POC, and while awaiting a more thorough fix, this could be reasonable.
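A sketch of that workaround, dry-run in a scratch directory (creating the real links requires root and the actual /dev/disk/by-id; "nvme-something" is a placeholder name, as above):

```shell
# Stand-in for /dev/disk/by-id; on a real node you would run the same
# ln commands (as root) inside /dev/disk/by-id itself.
byid=$(mktemp -d)

# Relative targets mirror how udev lays out the by-id symlinks.
ln -s ../../nvme0n1   "$byid/nvme-something"
ln -s ../../nvme0n1p1 "$byid/nvme-something-part1"

ls -la "$byid"
```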
From @kinvaris on August 16, 2016 14:51
FAILED: This test failed, even when creating the symlinks. I was not able to add the NVMe to the backend.
```
ii openvstorage                  2.7.2-rev.3867.ec9d46d-1  amd64  openvStorage
ii openvstorage-backend          1.7.2-rev.675.37ca5b8-1   amd64  openvStorage Backend plugin
ii openvstorage-backend-core     1.7.2-rev.675.37ca5b8-1   amd64  openvStorage Backend plugin core
ii openvstorage-backend-webapps  1.7.2-rev.675.37ca5b8-1   amd64  openvStorage Backend plugin Web Applications
ii openvstorage-cinder-plugin    1.2.2-rev.32.948a8c1-1    amd64  OpenvStorage Cinder plugin for OpenStack
ii openvstorage-core             2.7.2-rev.3867.ec9d46d-1  amd64  openvStorage core
ii openvstorage-hc               1.7.2-rev.675.37ca5b8-1   amd64  openvStorage Backend plugin HyperConverged
ii openvstorage-sdm              1.6.2-rev.330.f06c8de-1   amd64  Open vStorage Backend ASD Manager
ii openvstorage-webapps          2.7.2-rev.3867.ec9d46d-1  amd64  openvStorage Web Applications
```
This should be resolved by implementing the same disk-management logic from openvstorage/framework#792 here.
Logs:
```
2016-10-20 11:57:26 09000 +0200 - ovs-esxi-host2 - 15734/140277941450560 - celery/celery.worker.job - 83 - DEBUG - Task accepted: albanode.initialize_disk[717eb450-817b-4e1e-8025-811f5fd05277] pid:15759
2016-10-20 11:57:26 16000 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/albanode - 36 - DEBUG - Initializing disk /dev/nvme0n1 at node 172.19.10.42
2016-10-20 11:57:32 06700 +0200 - ovs-esxi-host2 - 15759/140277941450560 - celery/celery.redirected - 38 - WARNING - 2016-10-20 11:57:32 06600 +0200 - ovs-esxi-host2 - 15759/140277941450560 - extensions/asdmanagerclient - 37 - INFO - Request "add_disk" took 5.90 seconds (internal duration 5.89 seconds)
2016-10-20 11:57:32 30000 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/scheduled tasks - 39 - INFO - Ensure single CHAINED mode - ID 1476957452_ISV6KnStS2 - Amount of jobs pending for key ovs_ensure_single_ovs.disk.sync_with_reality: 0
2016-10-20 11:57:32 30400 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/scheduled tasks - 40 - INFO - Ensure single CHAINED mode - ID 1476957452_ISV6KnStS2 - New task ovs.disk.sync_with_reality with params {'storagerouter_guid': u'd1bc0dee-8513-4749-ac47-f2c575dbe24e'} scheduled for execution
2016-10-20 11:57:32 30600 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/scheduled tasks - 41 - INFO - Ensure single CHAINED mode - ID 1476957452_ISV6KnStS2 - Amount of jobs pending for key ovs_ensure_single_ovs.disk.sync_with_reality: 1
2016-10-20 11:57:32 30600 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/scheduled tasks - 42 - INFO - Ensure single CHAINED mode - ID 1476957452_ISV6KnStS2 - KWARGS: {'storagerouter_guid': u'd1bc0dee-8513-4749-ac47-f2c575dbe24e'}
2016-10-20 11:57:35 74500 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 43 - INFO - Investigating device /dev/sda
2016-10-20 11:57:35 81600 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 44 - INFO - Investigating partition /dev/sda1
2016-10-20 11:57:35 98300 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 45 - INFO - Investigating partition /dev/sda2
2016-10-20 11:57:36 12600 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 46 - INFO - Investigating partition /dev/sda3
2016-10-20 11:57:36 29300 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 47 - INFO - Investigating partition /dev/sda4
2016-10-20 11:57:36 48500 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 48 - INFO - Investigating device /dev/sdb
2016-10-20 11:57:36 55800 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 49 - INFO - Investigating partition /dev/sdb1
2016-10-20 11:57:36 72500 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 50 - INFO - Investigating device /dev/sdc
2016-10-20 11:57:36 82200 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 51 - INFO - Investigating device /dev/sdd
2016-10-20 11:57:36 89300 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 52 - INFO - Investigating device /dev/nvme0n1
2016-10-20 11:57:36 96400 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 53 - INFO - Investigating partition /dev/nvme0n1p1
2016-10-20 11:57:37 06900 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 54 - INFO - Disk sdc - Found, updating
2016-10-20 11:57:37 08000 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 55 - INFO - Disk sdd - Found, updating
2016-10-20 11:57:37 08900 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 56 - INFO - Disk sdb - Found, updating
2016-10-20 11:57:37 10700 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 57 - INFO - Disk sda - Found, updating
2016-10-20 11:57:37 14900 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 58 - INFO - Disk nvme0n1 - Found, updating
2016-10-20 11:57:37 15900 +0200 - ovs-esxi-host2 - 15759/140277941450560 - lib/disk - 59 - INFO - Disk nvme0n1 - Creating partition - {'filesystem': u'xfs', 'state': 'OK', 'offset': 2097152, 'mountpoint': u'/mnt/alba-asd/hDIIfjCFwmbz1lTe', 'size': 400086269952, 'aliases': ['/dev/disk/by-partlabel/nvme0n1']}
```
GUI: the NVMe was initialized and claimed as an Alba disk. Test passed.
From @khenderick on July 1, 2016 12:32
From @kinvaris on June 21, 2016 12:14
I have an NVMe in my environment that I want to claim as an Alba backend disk, but I can't. Although this is not possible on the backend page, it is possible to claim it for a role.

Copied from original issue: openvstorage/framework#644
Copied from original issue: openvstorage/framework-alba-plugin#155