Closed by soakes 6 years ago
The raid level controls how data is laid out locally on a single node. It is a per-node option, set during node initialization through config.json.
The HA (or replication) level controls how data is laid out globally within the cluster. It is a per-volume option, set at volume creation and changeable later.
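As a rough illustration of the per-node side of this (the exact keys vary by release; `raid_level` and the device paths below are assumptions for illustration, not taken from this thread), the node-local layout might be declared in config.json along these lines:

```json
{
  "storage": {
    "devices": ["/dev/sdb", "/dev/sdc"],
    "raid_level": "raid0"
  }
}
```

The HA level, by contrast, is not in this file at all; it is chosen per volume when the volume is created.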
Are you looking to change the device raid level or the HA level of the volume?
Hi @prabirpaul, thank you for explaining that.
I am trying to build a shared docker storage solution using two storage nodes plus one storage-less node for the quorum. However, it seems that when I drop one of the storage nodes, the whole cluster fails with:
Status: PX is not in quorum
License: PX-Developer
Node ID: d148adb6-fa00-47ff-bcab-0d15eb7e0c7c
IP: 172.30.1.220
Local Storage Pool: 0 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
No storage pool
Local Storage Devices: 0 device
Device Path Media Type Size Last-Scan
No storage device
total - 0 B
Cluster Summary
Cluster ID: ef4d3327-ad43-42f8-a419-f5b76fd95423
Cluster UUID: 886939b9-6904-492f-936f-ca5cf53aa959
Nodes: 2 node(s) with storage, 1 node(s) without storage
IP ID StorageNode Used Capacity Status
172.30.3.220 94332c3b-b8cb-4c35-849f-bf996136c764 Yes Unavailable Unavailable Not Available
172.30.2.220 49582994-4d64-4586-a614-577dcdce5f4d Yes Unavailable Unavailable Not Available
172.30.1.220 d148adb6-fa00-47ff-bcab-0d15eb7e0c7c No Unavailable Unavailable Not in Quorum (This node)
Global Storage Pool
Total Used : 0 B
Total Capacity : 0 B
I would have expected that quorum should still be possible thanks to the storage-less node, but alas it isn't, and so all nodes go offline.
Is there something I am misunderstanding?
In short, I just want one extra copy of the data in case a node fails, while the surviving node remains able to function until new disks etc. can be installed.
I have tried several configurations, but they all seem to break when one of the two storage nodes is offline.
Because storage-less nodes do not participate in quorum decisions, taking down a storage node puts the cluster out of quorum. You should be able to take down the storage-less node, though. You would need 3 storage nodes to do what you describe above.
Create volumes with HA level==2, which creates 1 extra copy of each volume, distributed equally among all 3 nodes.
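The quorum arithmetic described above can be sketched as follows (a minimal illustration, assuming, as stated, that only storage nodes vote):

```python
def has_quorum(storage_nodes_up: int, storage_nodes_total: int) -> bool:
    """Quorum requires a strict majority of the voting (storage) nodes.

    Storage-less nodes do not vote, so they are excluded from both counts.
    """
    return storage_nodes_up > storage_nodes_total // 2

# Two storage nodes, one down: 1 of 2 voters up -> no quorum,
# regardless of how many healthy storage-less nodes exist.
print(has_quorum(1, 2))  # False

# Three storage nodes, one down: 2 of 3 voters up -> quorum holds.
print(has_quorum(2, 3))  # True
```

This is why the two-storage-node setup in the question loses quorum when either storage node drops, while a three-storage-node cluster tolerates one failure.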
Thank you kindly. This does explain everything.
@prabirpaul One last quick question if I may: given the 1TB limit, since I want to mirror the data for HA reasons, does that mean I am limited to about 333GB? Or can I still have 1TB on each node, leave it set to HA level==2, and have the whole 1TB usable?
Sure. The size of the replicas does not count towards the volume size limit. You should be able to create volumes of 1TB with an HA factor of 1, 2, or 3.
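The arithmetic behind that answer can be sketched as follows (illustrative only; the function name and units are mine, not from the product):

```python
def raw_usage_gb(volume_size_gb: int, ha_factor: int) -> int:
    """Raw capacity consumed cluster-wide: one full copy per replica.

    Only volume_size_gb counts against the per-volume size limit;
    the extra replica copies do not.
    """
    return volume_size_gb * ha_factor

# A 1000 GB volume with HA factor 2 keeps two full copies,
# consuming 2000 GB of raw storage across the cluster, while
# still counting as a single 1000 GB volume against the limit.
print(raw_usage_gb(1000, 2))  # 2000
```

So the 333GB worry does not apply: the limit is on the logical volume size, and each storage node simply needs enough raw capacity to hold its replica.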
Thank you @prabirpaul, this is great news.
Quick question, as the documentation doesn't seem to shed any light on this: I am a little worried that it says raid0. The data is replicated to two nodes when the shared volume was created, so surely this should say RAID1?
I just want to make sure that my data is safe. Is there any way to change this to RAID1?