openvstorage / framework

The Framework is a set of components and tools that provides an interface (GUI / API) to set up, extend and manage an Open vStorage platform.

No available IP addresses found suitable for Storage Router storage IP #843

Closed JeffreyDevloo closed 7 years ago

JeffreyDevloo commented 8 years ago

Problem description

I was setting up a single-node cluster with only one NIC (and therefore only one IP) without installing KVM and libvirt. The installation was successful, but I received the following error when adding a vpool:

2016-08-23 14:09:44 00500 +0200 - ovs-node1 - 20554/140524164953920 - celery/celery.worker.job - 238 - ERROR - Task ovs.storagerouter.add_vpool[96a435c6-fe17-42da-8eeb-4c8252ef9460] raised unexpected: ValueError('Errors validating the partition roles:\n - No available IP addresses found suitable for Storage Router storage IP',)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "/opt/OpenvStorage/ovs/lib/storagerouter.py", line 539, in add_vpool
    raise ValueError('Errors validating the partition roles:\n - {0}'.format('\n - '.join(set(error_messages))))
ValueError: Errors validating the partition roles:
 - No available IP addresses found suitable for Storage Router storage IP

It turns out that installing the kvm and libvirt packages created a virtual bridge with its own IP. Thanks to that extra address on my node I could add vpools without a problem, which makes me wonder why such a check even exists.

Possible root of the problem

In ovs/lib/storagerouter.py (line 531) the node's grid IP is removed from the list of candidate addresses as a check. The grid IP can, however, still be selected in the GUI...

        ipaddresses = metadata['ipaddresses']
        grid_ip = EtcdConfiguration.get('/ovs/framework/hosts/{0}/ip'.format(unique_id))
        if grid_ip in ipaddresses:
            ipaddresses.remove(grid_ip)
        if not ipaddresses:
            error_messages.append('No available IP addresses found suitable for Storage Router storage IP')

        if error_messages:
            raise ValueError('Errors validating the partition roles:\n - {0}'.format('\n - '.join(set(error_messages))))
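To make the failure concrete, here is a small stand-alone sketch of the same logic (the helper name validate_storage_ip is made up for illustration; only the remove-and-check behaviour mirrors the excerpt above). With a single NIC the grid IP is the only candidate, so the list ends up empty and the error is raised; once libvirt's default bridge adds a second address, one candidate remains and the check passes.

    def validate_storage_ip(ipaddresses, grid_ip):
        # Same behaviour as the excerpt: drop the grid IP from the candidates
        # and complain when nothing is left over.
        ipaddresses = list(ipaddresses)
        if grid_ip in ipaddresses:
            ipaddresses.remove(grid_ip)
        if not ipaddresses:
            raise ValueError('Errors validating the partition roles:\n'
                             ' - No available IP addresses found suitable for Storage Router storage IP')
        return ipaddresses

    # Single NIC: the grid IP is the only address, so validation fails.
    try:
        validate_storage_ip(['10.100.199.151'], '10.100.199.151')
    except ValueError as ex:
        print(ex)

    # After installing kvm/libvirt the default bridge typically adds 192.168.122.1,
    # so one candidate survives and validation passes.
    print(validate_storage_ip(['10.100.199.151', '192.168.122.1'], '10.100.199.151'))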

Possible solution

Remove this check.
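For illustration only, the block above could look like this with the grid-IP filter dropped, so that the grid IP can double as the storage IP on single-NIC nodes. This is just a sketch of the idea; it is not necessarily the change that ended up in the linked PR.

        ipaddresses = metadata['ipaddresses']
        # The grid IP is no longer filtered out, so a single-NIC node keeps at least
        # one candidate; an entirely empty address list remains an error.
        if not ipaddresses:
            error_messages.append('No available IP addresses found suitable for Storage Router storage IP')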

Additional information

Setup

Hyperconverged setup

khenderick commented 8 years ago

Most likely it's just the check that can be removed, but we should take a look at the surrounding code to see whether there are more leftovers.

That is assuming we actually want customers to be able to use a single IP for both the management and the storage network. If we don't want that, we should make sure everything is checked in the wizards as well.
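If the restriction were kept instead, the wizard and the backend would need to agree on what is selectable (today the GUI still offers the grid IP that the backend then rejects). Below is a purely hypothetical sketch of a shared helper that both sides could call; nothing like this exists in the codebase today.

    def selectable_storage_ips(ipaddresses, grid_ip, allow_shared_ip=False):
        # Hypothetical helper: the wizard would only offer, and the backend would
        # only accept, the addresses returned here.
        if allow_shared_ip:
            return list(ipaddresses)
        return [ip for ip in ipaddresses if ip != grid_ip]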

wimpers commented 8 years ago

@JeffreyDevloo please create a PR to remove the check. Do check surrounding code.

kvanhijf commented 7 years ago

https://github.com/openvstorage/framework/pull/893 openvstorage-2.7.3-rev.3949.f0378dc

JeffreyDevloo commented 7 years ago

Steps

root@ovs-node1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:4c:e7:47 brd ff:ff:ff:ff:ff:ff
    inet 10.100.199.151/16 brd 10.100.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe4c:e747/64 scope link 
       valid_lft forever preferred_lft forever

Output

2016-09-20 09:59:04 84900 +0200 - ovs-node1 - 28217/140572974147392 - celery/celery.worker.strategy - 154 - INFO - Received task: ovs.vpool.up_and_running[ed1c310d-cc11-4b84-8517-9a60118e2c7a]
2016-09-20 09:59:04 85000 +0200 - ovs-node1 - 28217/140572974147392 - celery/celery.pool - 155 - DEBUG - TaskPool: Apply <function _fast_trace_task at 0x7fd9ad9f1140> (args:('ovs.vpool.up_and_running', 'ed1c310d-cc11-4b84-8517-9a60118e2c7a', [], {'storagedriver_id': u'vm-vpooleoWAq9OBsa657llX'}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': [], 'retries': 0, u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'sr.eoWAq9OBsa657llX', u'exchange': u'generic'}, 'expires': None, u'hostname': 'celery@ovs-node1', 'task': 'ovs.vpool.up_and_running', 'callbacks': None, u'correlation_id': u'ed1c310d-cc11-4b84-8517-9a60118e2c7a', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {'storagedriver_id': u'vm-vpooleoWAq9OBsa657llX'}, 'eta': None, u'reply_to': u'9a5b6751-364b-3c38-ace5-6383e18bbfe1', 'id': 'ed1c310d-cc11-4b84-8517-9a60118e2c7a', u'headers': {}}) kwargs:{})
2016-09-20 09:59:04 85100 +0200 - ovs-node1 - 28217/140572974147392 - celery/celery.worker.job - 156 - DEBUG - Task accepted: ovs.vpool.up_and_running[ed1c310d-cc11-4b84-8517-9a60118e2c7a] pid:28270
2016-09-20 09:59:04 86000 +0200 - ovs-node1 - 28270/140572974147392 - celery/celery.redirected - 11 - WARNING - 2016-09-20 09:59:04 86000 +0200 - ovs-node1 - 28270/140572974147392 - log/volumedriver_task - 10 - INFO - [ovs.lib.vpool.up_and_running] - [] - {"storagedriver_id": "vm-vpooleoWAq9OBsa657llX"} - {"storagedriver": "9d0c001e-5385-4b3f-931a-84f68eabe5de"}
2016-09-20 09:59:04 87300 +0200 - ovs-node1 - 28217/140572974147392 - celery/celery.worker.job - 157 - INFO - Task ovs.vpool.up_and_running[ed1c310d-cc11-4b84-8517-9a60118e2c7a] succeeded in 0.022094079s: None

Test result

I did not run into the error described in the ticket. Test passed.

Hyperconverged setup

Package information