openvstorage / framework-alba-plugin

The Framework ALBA plugin extends the Open vStorage GUI with functionality to manage ASDs (Alternate Storage Daemons) and Seagate Kinetic drives.

Not all alba asds are claimed as backend_disks and can be used for other roles #169

Closed kinvaris closed 7 years ago

kinvaris commented 8 years ago

We've observed this on an environment: http://i63.tinypic.com/or07id.png

It seems like not all ALBA disks are "seen" as backend disks. This means we can change or add roles on those backend disks, which should not be possible.

khenderick commented 8 years ago

The cause of this issue is that Backend disks are handled differently from StorageRouter disks. They are discovered differently, and their paths can differ: a disk in the Backend cannot be identified by its WWN, while a StorageRouter disk can. And that path is the identifier to which the Backend role is linked.
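The mismatch can be sketched with made-up identifiers: the two discovery paths key the same physical disk differently, so a role linked under one key is invisible under the other (all paths and WWNs below are hypothetical, not taken from the environment in the screenshot):

```python
# Two views of the same physical disk, keyed differently (hypothetical values).
storagerouter_view = {'/dev/disk/by-id/wwn-0x5000c5008d2f0001': 'sdb'}  # keyed by WWN alias
backend_view = {'/dev/sdb': 'sdb'}                                      # keyed by plain device path

# The BACKEND role is linked to the StorageRouter identifier.
roles = {'/dev/disk/by-id/wwn-0x5000c5008d2f0001': ['BACKEND']}

# The backend-side lookup misses the role because the keys differ:
claimed = [path for path in backend_view if path in roles]
print(claimed)  # -> [] : the disk looks unclaimed, so extra roles can still be assigned
```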

khenderick commented 8 years ago

In an ideal world we would use WWNs all the time, but they aren't available on VMware environments.

khenderick commented 8 years ago

@wimpers, can we re-validate which disks can be used for backends and/or roles? I'd like to bring both disk-related code paths more in line so this ticket can be resolved.

Currently:

khenderick commented 8 years ago

After discussions with @wimpers, we want to allow basically every block device that can be found; it's up to the customer to decide whether a disk is a good choice or not. This means we can bring the implementations more in line and, in theory, use the same identifier in both code paths, which in turn should solve the reported issue.

pploegaert commented 8 years ago

@khenderick: we now also install OVS on KVM/QEMU, so if possible please include vd* too ...
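Following that line of thought, a minimal sketch of a device filter that accepts any plain block device, including virtio vd* names, while skipping pseudo-devices. The prefix lists are assumptions for illustration, not the plugin's actual rules:

```python
def eligible_block_devices(names):
    """Filter a /sys/block listing down to devices that could be offered as
    candidates: sd* (SCSI/SATA), vd* (virtio on KVM/QEMU), xvd* (Xen), etc.
    are kept; loop, ram, device-mapper, md and CD-ROM entries are dropped."""
    skip_prefixes = ('loop', 'ram', 'dm-', 'sr', 'fd', 'md')
    return sorted(name for name in names if not name.startswith(skip_prefixes))

# Example with a hypothetical /sys/block listing:
print(eligible_block_devices(['sda', 'sdb', 'vda', 'loop0', 'ram1', 'sr0', 'dm-0']))
# -> ['sda', 'sdb', 'vda']
```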

khenderick commented 8 years ago

Removed the question label; apparently I already posted the outcome of the discussion.

JeffreyDevloo commented 8 years ago

Perhaps we shouldn't use the disk name as an identifier, as these names aren't persistent across reboots.
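A common way around that on Linux is to resolve a kernel name to a persistent alias under /dev/disk/by-id, falling back to the plain device path when no WWN alias exists (as on the VMware environments mentioned above). A sketch of that lookup, not the plugin's actual implementation:

```python
import os

def persistent_id_for(device, by_id_dir='/dev/disk/by-id'):
    """Return a stable identifier for a block device: prefer a wwn-* alias
    from /dev/disk/by-id, fall back to the plain device path otherwise."""
    target = os.path.realpath(os.path.join('/dev', device))
    if os.path.isdir(by_id_dir):
        # Collect every by-id symlink that resolves to this device
        aliases = [name for name in os.listdir(by_id_dir)
                   if os.path.realpath(os.path.join(by_id_dir, name)) == target]
        wwn_aliases = sorted(a for a in aliases if a.startswith('wwn-'))
        if wwn_aliases:
            return os.path.join(by_id_dir, wwn_aliases[0])
    return target
```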

khenderick commented 7 years ago

This depends on the implementation of openvstorage/framework#792 and on implementing the same logic in the openvstorage/asd-manager repo. That will result in both sides having the same code, which in turn should make this ticket obsolete.

kvanhijf commented 7 years ago

Most likely fixed by https://github.com/openvstorage/framework/issues/792

JeffreyDevloo commented 7 years ago

Steps

Role test

In [41]: for p in DiskPartitionList.get_partitions():
   ....:     if len(p.roles) > 0: print '{0} {1}'.format(p.roles, p.disk.name)
   ....:
[u'BACKEND'] sdc
[u'DB'] sdd
[u'BACKEND'] sda
[u'WRITE'] sdf
[u'DB'] sdd
[u'SCRUB'] sde
[u'BACKEND'] sda
[u'BACKEND'] sdb
[u'WRITE'] sdf
[u'DB'] sdd
[u'BACKEND'] sdb
[u'WRITE'] sdf
[u'BACKEND'] sdc
[u'SCRUB'] sde
[u'BACKEND'] sdc
[u'BACKEND'] sdb
[u'SCRUB'] sde
[u'WRITE'] sda

Found backend roles.

Test result

Test passed, as the ticket concerned backend roles. The call has, however, only been blocked in the GUI and can still happen via the API. That is a separate issue, though.
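The missing API-side guard could look roughly like this: before applying a role change, reject any disk already claimed as a backend disk. Function name and shape are hypothetical, not the actual framework API:

```python
def validate_role_change(current_roles, requested_roles):
    """Server-side guard: disks claimed by an ALBA backend must not get
    extra roles, regardless of whether the call comes from the GUI or the API."""
    if 'BACKEND' in current_roles:
        raise RuntimeError('Disk is claimed by a backend; roles cannot be changed')
    # Unclaimed disk: merge the requested roles with the existing ones
    return sorted(set(current_roles) | set(requested_roles))

print(validate_role_change(['DB'], ['WRITE']))  # -> ['DB', 'WRITE']
```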

Packages

An additional test, related to the GUI test but not directly relevant to this ticket

Code:

# Copyright (C) 2016 iNuron NV
#
# This file is part of Open vStorage Open Source Edition (OSE),
# as available from
#
#      http://www.openvstorage.org and
#      http://www.openvstorage.com.
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License v3 (GNU AGPLv3)
# as published by the Free Software Foundation, in version 3 as it comes
# in the LICENSE.txt file of the Open vStorage OSE distribution.
#
# Open vStorage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY of any kind.
from ci.helpers.api import OVSClient
from ci.helpers.roles import RoleHelper
from ci.helpers.storagerouter import StoragerouterHelper

class ConfigureTest(object):

    @staticmethod
    def test():
        # Current configuration uses sda, sdb and sdc for backend
        ip = '10.100.199.151'
        roles = ['WRITE']
        diskname = 'sda'
        api = OVSClient(ip, 'admin', 'admin')
        ConfigureTest.add_disk_role(ip, diskname, roles, api)
        # Expecting an error to occur
        print 'Test failed'

    @staticmethod
    def add_disk_role(ip, diskname, roles, api, min_size=2):
        """
        Partition and adds roles to a disk

        :param ip: storagerouter ip where the disk is located
        :type ip: str
        :param diskname: shortname of a disk (e.g. sdb)
        :type diskname: str
        :param roles: list of roles you want to add to the disk
        :type roles: list
        :param api: specify a valid api connection to the setup
        :type api: ci.helpers.api.OVSClient
        :param min_size: minimum total_partition_size that is required to allocate the disk role
        :type min_size: int
        :param config: configuration file
        :type config: dict
        :return:
        """

        # Fetch information
        storagerouter_guid = StoragerouterHelper.get_storagerouter_guid_by_ip(ip)
        disk = StoragerouterHelper.get_disk_by_ip(ip, diskname)
        # Check if there are any partitions on the disk, if so check if there is enough space
        unused_partitions = []
        if len(disk.partitions) > 0:
            total_partition_size = 0
            for partition in disk.partitions:
                total_partition_size += partition.size
                # Check if the partition is in use; we could possibly write a role on an unused partition
                # if partition.mountpoint is None:
                #     no mountpoint means the partition is not mounted
                # @todo: support partitions that are not sequential
                unused_partitions.append(partition)

            # Elect biggest unused partition as potential candidate
            biggest_unused_partition = None
            if len(unused_partitions) > 0:
                # Sort the list based on size
                unused_partitions.sort(key=lambda x: x.size, reverse=True)
                biggest_unused_partition = unused_partitions[0]
            if ((disk.size-total_partition_size)/1024**3) > min_size:
                # disk is still large enough, let the partitioning begin and apply some roles!
                RoleHelper._configure_disk(storagerouter_guid=storagerouter_guid, disk_guid=disk.guid, offset=total_partition_size+1,
                                          size=(disk.size-total_partition_size)-1, roles=roles, api=api)
            elif biggest_unused_partition is not None and (biggest_unused_partition.size/1024**3) > min_size:
                RoleHelper._configure_disk(storagerouter_guid=storagerouter_guid, disk_guid=disk.guid, offset=biggest_unused_partition.offset,
                                          size=biggest_unused_partition.size, roles=roles, api=api, partition_guid=biggest_unused_partition.guid)
            else:
                # disk is too small
                raise RuntimeError("Disk `{0}` on node `{1}` is too small for role(s) `{2}`, min. free size is `{3}` GiB"
                                   .format(diskname, ip, roles, min_size))
        else:
            # there are no partitions on the disk, go nuke it!
            RoleHelper._configure_disk(storagerouter_guid, disk.guid, 0, disk.size, roles, api)

if __name__ == "__main__":
    ConfigureTest.test()

Output:

2016-11-21 13:39:25 41200 +0100 - ovs-node-1 - 25015/140532161955584 - setup/ci_role_helper - 0 - INFO - Adjusting disk `3984a040-7130-49eb-a80b-468b9a25b7c5` should have succeeded on storagerouter `32783fe3-b2df-471a-9db5-41419e07efa1`
Test failed