lovilak opened this issue 6 years ago
Hi @lovilak, there is code in the drac ansible module to perform the conversion of disks from non-RAID to RAID mode if necessary. I wonder why this is not being executed.
Are these issues that you have time to investigate and/or fix?
No, I don't know where to look.
The code for the drac module is here: https://github.com/stackhpc/drac.
This is where it decides which disks to convert: https://github.com/stackhpc/drac/blob/master/library/drac.py#L551.
The actual conversion is done here: https://github.com/stackhpc/drac/blob/master/library/drac.py#L1161.
Could you show me where to look for the RAID5 size problem?
I think this is where the size is calculated: https://github.com/stackhpc/drac/blob/master/library/drac.py#L528, as the minimum of all physical disks. We then pass this into the virtual disk creation, multiplied by the span depth: https://github.com/stackhpc/drac/blob/master/library/drac.py#L594.
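For reference, here is a minimal sketch of that sizing logic as described above (illustrative names, not the module's actual code):

def virtual_disk_size_mb(physical_disk_sizes_mb, span_depth):
    # Take the size of the smallest physical disk in the set...
    min_size_mb = min(physical_disk_sizes_mb)
    # ...and multiply by the span depth when creating the virtual disk.
    return min_size_mb * span_depth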
For the size problem:
I tried to create a RAID5 with 4 x 1.7 TB physical disks. With a span_depth of 1 it created
min_size_mb x 1 = 1.7 TB
and if I set a span_depth of 3, which is my goal, I get an error: Provided Physical disk not valid for this operation.
For RAID5 with 4 disks, don't you need a 2x2 configuration? span_depth = 2, span_length = 2?
Actually, maybe not.
I think I should have: span_length = 4 and span_depth = 1, but this gives me a wrong virtual disk size. I've multiplied the min_size_mb var by 3 to do the trick.
Oh, so the problem is that it's not accounting for the parity data when calculating the size?
I think the bug is that span_depth should accept a value of 3. This is handled by the dracclient python module. To work around it I temporarily did min_size_mb x 3.
So we need a new way to calculate the size when it is RAID5 (or 4?):
size = min_size_mb * (span_length - 1)
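For example, with the 4 x 1.7 TB disks mentioned above in a single span, that would give 1.7 TB x (4 - 1), roughly 5.1 TB of usable space, instead of the 1.7 TB the current calculation produces.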
Is that correct?
Hi, sorry for the weekend delay. This would work for RAID 1 and RAID 5 but not for RAID 0 and 6?
I would only apply that logic to RAID configurations that use a parity disk - 4 and 5. Other configurations could use the existing logic. More generally it could be:
parity_disks = 2 if RAID == 6 else 1 if RAID in (4, 5) else 0
length = span_length - parity_disks
depth = span_depth
size = min_size_mb * length * depth
parity_disks = 2 if RAID == 6 else 1 if RAID in (1, 5) else 0
length = span_length - parity_disks
size = min_size_mb * length
RAID 1 does not have any parity disks, it mirrors the data. We need to multiply by depth to allow for nested RAID, such as RAID 10. We're also not really catering for mirrored setups, i.e. RAID 1 and RAID 10.
parity_disks = 2 if RAID == 6 else 1 if RAID in (3, 4, 5) else 0
length = span_length - parity_disks
effective_length = 1 if RAID in (1, 10) else length
size = min_size_mb * effective_length * span_depth
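Putting that together, a minimal sketch of what the sizing helper could look like (illustrative only; the function name and the exact RAID-level handling are assumptions, not the module's actual code):

def raid_virtual_disk_size_mb(raid_level, span_length, span_depth, min_size_mb):
    # Parity-based levels lose one (RAID 3/4/5) or two (RAID 6) disks per span.
    parity_disks = 2 if raid_level == 6 else 1 if raid_level in (3, 4, 5) else 0
    length = span_length - parity_disks
    # Mirrored levels (RAID 1, 10) only give one disk's worth of space per span.
    effective_length = 1 if raid_level in (1, 10) else length
    return min_size_mb * effective_length * span_depth

# e.g. RAID 5 over 4 disks in a single span gives 3 disks' worth of usable space:
# raid_virtual_disk_size_mb(5, span_length=4, span_depth=1, min_size_mb=1716352) == 5149056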
Sounds good to me!
I have an issue with this hardware. When I apply this playbook:
- hosts: all
  gather_facts: no
  roles:
I must first force the disks from non-RAID to RAID mode manually, because otherwise I get:
The full traceback is:
  File "/tmp/ansible_A0_fAK/ansible_module_drac.py", line 1048, in commit_raid
    bmc.commit_pending_raid_changes(controller, False)
  File "/usr/lib/python2.7/site-packages/dracclient/client.py", line 478, in commit_pending_raid_changes
    cim_name='DCIM:RAIDService', target=raid_controller, reboot=reboot)
  File "/usr/lib/python2.7/site-packages/dracclient/resources/job.py", line 151, in create_config_job
    expected_return_value=utils.RET_CREATED)
  File "/usr/lib/python2.7/site-packages/dracclient/client.py", line 673, in invoke
    raise exceptions.DRACOperationFailed(drac_messages=messages)
And when I've done this manual change and launch my playbook, it creates a wrong virtual disk size for the RAID5 DATA volume (4 x 1716352 MB physical disks give a 1716351 MB virtual disk).
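If I apply the parity-aware formula above (my own calculation, not yet verified on the hardware), the expected size for that RAID5 set would be 1716352 MB x (4 - 1) = 5149056 MB, whereas the module still creates roughly one disk's worth.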