Closed: goobags closed this issue 4 months ago
And I just put the original drive back in; now it shows as crashed (the one drive, not the entire RAID array), and I cannot repair it because DSM's UI only sees the drive as usable for an SSD cache.
I never considered that someone might try to replace an M.2 drive to rebuild or expand the RAID. What you wanted to do might have been available from Storage Manager if you were on DSM 7.2 and had run https://github.com/007revad/Synology_HDD_db.
Is one of the original small M.2 drives still showing as degraded, or has the whole array crashed?
Just one drive has crashed; the storage pool is still functional. I'm just trying to avoid having to reinstall a few apps and Docker on an entirely new storage pool/volume.
I think backing up, then creating a new storage pool/volume and reinstalling, may be the quickest solution.
It's going to take a while for me to work out how DSM does a RAID repair.
@007revad Thanks for all of the work you've done on your scripts – they're really useful!
I wanted to check in on this issue since I recently had a RAID1 M.2 volume using the official 10G NIC + M.2 adapter card lose a drive. Have you looked at all into allowing a blank disk to be added to an existing volume? If not, I'd be interested in helping get this working if you have some idea of where to start.
@dantrauner
First, make sure your data from the NVMe volume is backed up.
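To make "backed up" concrete, here's a minimal file-level copy sketch. It uses stand-in temp directories so it runs anywhere; on the NAS the source would be the NVMe volume's mount point (e.g. `/volume2`, an assumption you should confirm with `df`), and `rsync -aH` or Hyper Backup would be the more usual tools:

```shell
#!/bin/sh
# Stand-in directories so this sketch runs anywhere; on the NAS the
# source would be the NVMe volume's mount point (e.g. /volume2).
src=$(mktemp -d)
dst=$(mktemp -d)
echo "container config" > "$src/docker-compose.yml"

# -a = archive: recurse and preserve ownership, permissions, timestamps.
# On the NAS, `rsync -aH` (or Hyper Backup) would be the usual choice.
cp -a "$src/." "$dst/"

ls "$dst"
```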
What does the following command return:
```shell
sudo synostgpool --auto-repair -h
```
And this one:
```shell
sudo synostgpool --misc --get-pool-info | jq
```
I only need the nvme section, like this:

```json
{
  "device_type": "shr_without_disk_protect",
  "disks": [
    "nvme1n1"
  ],
  "id": "reuse_2",
  "is_writable": true,
  "num_id": 2,
  "pool_path": "reuse_2",
  "raids": [
    {
      "designedDiskCount": 1,
      "devices": [
        {
          "id": "nvme1n1",
          "slot": 0,
          "status": "normal"
        }
      ],
      "hasParity": false,
      "minDevSize": "493964574720",
      "normalDevCount": 1,
      "raidCrashedReason": 0,
      "raidPath": "/dev/md3",
      "raidStatus": 1,
      "spares": []
    }
  ],
  "size": {
    "total": "489118760960",
    "used": "488565112832"
  },
  "space_path": "/dev/vg2",
  "status": "normal",
  "summary_status": "normal"
},
```
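If `jq` isn't available on the NAS, the two fields that matter most (`summary_status` and `raidPath`) can be pulled out of the captured pool info with plain `sed`. A minimal sketch, run here against a sample abridged from the output above so it is self-contained:

```shell
#!/bin/sh
# Sample pool info, abridged from `synostgpool --misc --get-pool-info` output
pool_json=$(cat <<'EOF'
{
  "raidCrashedReason": 0,
  "raidPath": "/dev/md3",
  "raidStatus": 1,
  "status": "normal",
  "summary_status": "normal"
}
EOF
)

# Extract the overall pool status and the md device you'd repair
status=$(printf '%s\n' "$pool_json" | sed -n 's/.*"summary_status": *"\([^"]*\)".*/\1/p')
raid_path=$(printf '%s\n' "$pool_json" | sed -n 's/.*"raidPath": *"\([^"]*\)".*/\1/p')

echo "pool ${raid_path} is ${status}"
```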
@dantrauner
Just now I was able to repair an NVMe RAID 1 storage pool from Storage Manager. For the steps I used to work for you, I need to know a few things about your setup.
Probably 60 seconds before your last reply, I decided to just use this opportunity to practice my DR procedure 😄 I'm bookmarking this and will try to repair next time, but:
For future reference I've created a few wiki pages documenting how I repaired my NVMe RAID 1 after replacing a drive.
Repair M.2 RAID 1 in internal M.2 slots
Repair M.2 RAID 1 in adaptor card - Requires the NAS has Internal M.2 slots.
Repair RAID via SSH - I have not tested this method yet...
The following comes from a blog post on how to create the volume manually (which even cites your script, @007revad). It's a snippet to get the new NVMe drive to show up as an option to repair the failed array. In this case, md3 is the md group for your existing storage pool, and the NVMe drive is referenced by its /dev path.
https://academy.pointtosource.com/synology/synology-ds920-nvme-m2-ssd-volume/
```shell
# Partition the new drive with Synology's standard layout (type 12)
sudo synopartition --part /dev/nvme1n1 12
# Add the drive's data partition to the degraded array so it rebuilds
sudo mdadm --manage /dev/md3 -a /dev/nvme1n1p3
```
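Once the partition has been added with `mdadm -a`, the kernel resyncs the mirror in the background, and progress appears in `/proc/mdstat`. A sketch of the check, run here against a sample mdstat capture so it is self-contained (on the NAS you would read the real `/proc/mdstat`; the sample numbers are illustrative, not from this thread):

```shell
#!/bin/sh
# Sample /proc/mdstat as captured mid-rebuild (illustrative values only)
mdstat=$(cat <<'EOF'
md3 : active raid1 nvme1n1p3[2] nvme0n1p3[0]
      483276544 blocks super 1.2 [2/1] [U_]
      [=======>.............]  recovery = 38.2% (184692736/483276544) finish=24.6min speed=201831K/sec
EOF
)

# Extract the recovery percentage for the rebuilding array
progress=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')

if [ -n "$progress" ]; then
  echo "md3 rebuild at ${progress}%"
else
  echo "md3 rebuild finished (or not running)"
fi
```

On the NAS, `watch cat /proc/mdstat` or `sudo mdadm --detail /dev/md3` gives the same information interactively.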
Hi,
I have used this script to set up a RAID 1 array (mirrored). I used some old, small M.2 drives as testers and have since ordered two bigger drives. I replaced one today the same way I have rebuilt RAID 1 arrays in the past: just swap a drive, then rebuild through the UI. The problem is I cannot get it to work.
Trying to repair the storage pool results in an error saying there are no drives that meet the requirements. Clicking the new drive under HDD/SSD doesn't let me do anything other than create an SSD cache (for obvious reasons; I'm on a DS918+).
Re-running the script only lets me select one drive (the new one) and the script fails to finish.