geerlingguy / arm-nas

Arm NAS configuration with ZFS.
GNU General Public License v3.0

Configure array of U.2 drives for SSD pool instead of SATA SSDs? #4

Open geerlingguy opened 8 months ago

geerlingguy commented 8 months ago

In my quest to max out the 10G link for my editing pool, I would like to investigate using an array of U.2 NVMe drives instead of the Samsung QVO SATA SSDs I'm currently using.

Parts I would need:

If I did this upgrade, I'd likely yank the 4x SATA SSDs so I can keep those drive bays clear for future HDD vdev expansion.
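
Roughly what that later expansion could look like (pool name, raidz level, and device IDs here are all placeholders, not an actual layout):

```
# Hypothetical future expansion: add a new raidz2 vdev of HDDs to an existing pool.
# Pool name and device IDs below are placeholders.
zpool add hddpool raidz2 \
  /dev/disk/by-id/ata-HDD_SERIAL_1 \
  /dev/disk/by-id/ata-HDD_SERIAL_2 \
  /dev/disk/by-id/ata-HDD_SERIAL_3 \
  /dev/disk/by-id/ata-HDD_SERIAL_4
```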

Tasks:

geerlingguy commented 8 months ago

I did contact Kioxia, and they may send over something a bit more substantial than the PM6! Heh.

geerlingguy commented 6 months ago

Mmm... CD8-R 15TB x2 installed, will test performance soon. The 4x SATA SSDs now live in my Pi 5 backup NAS, which has a full replica of the HL15.

[Image: nas-19]

[Image: nas-21]
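
Roughly how a replica like that could be kept in sync with incremental `zfs send` / `zfs receive` (pool, dataset, and host names below are placeholders, not the actual setup):

```
# Placeholder sketch of replicating a dataset to the backup NAS with zfs send/receive.
# Pool, dataset, and host names are hypothetical.

# Initial full replication:
zfs snapshot editingpool/media@base
zfs send editingpool/media@base | ssh pi-backup-nas zfs receive -F backuppool/media

# Later runs only send the changes since the previous snapshot:
zfs snapshot editingpool/media@$(date +%Y%m%d)
zfs send -i editingpool/media@base editingpool/media@$(date +%Y%m%d) \
  | ssh pi-backup-nas zfs receive -F backuppool/media
```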

geerlingguy commented 1 month ago

```
jgeerling@nas01:~$ zpool status -v nvmepool
  pool: nvmepool
 state: ONLINE
  scan: none requested
config:

    NAME                                       STATE     READ WRITE CKSUM
    nvmepool                                   ONLINE       0     0     0
      mirror-0                                 ONLINE       0     0     0
        nvme-KIOXIA_KCD8XRUG15T3_8240A01KTY97  ONLINE       0     0     0
        nvme-KIOXIA_KCD8XRUG15T3_8240A01MTY97  ONLINE       0     0     0

errors: No known data errors
```
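
For context, a two-way mirror like that boils down to a single `zpool create`; the device paths below are copied from the status output above, while the ashift/compression options are assumptions, not the pool's confirmed settings:

```
# Hypothetical recreation of the mirrored NVMe pool shown above.
# ashift and compression values are assumptions, not confirmed pool settings.
zpool create -o ashift=12 -O compression=lz4 nvmepool mirror \
  /dev/disk/by-id/nvme-KIOXIA_KCD8XRUG15T3_8240A01KTY97 \
  /dev/disk/by-id/nvme-KIOXIA_KCD8XRUG15T3_8240A01MTY97
```
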
geerlingguy commented 1 month ago

Quick performance baseline, comparing the HDD pool to the NVMe pool:

HDD Pool

| Benchmark                  | Result |
| -------------------------- | ------ |
| iozone 4K random read      | 752.48 MB/s |
| iozone 4K random write     | 230.74 MB/s |
| iozone 1M random read      | 7763.99 MB/s |
| iozone 1M random write     | 1646.86 MB/s |
| iozone 1M sequential read  | 7787.13 MB/s |
| iozone 1M sequential write | 1438.63 MB/s |

NVMe Pool

| Benchmark                  | Result |
| -------------------------- | ------ |
| iozone 4K random read      | 736.14 MB/s |
| iozone 4K random write     | 269.40 MB/s |
| iozone 1M random read      | 7362.35 MB/s |
| iozone 1M random write     | 3694.74 MB/s |
| iozone 1M sequential read  | 7373.37 MB/s |
| iozone 1M sequential write | 3692.56 MB/s |
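
The exact iozone invocation isn't shown here, so the commands below are an assumption about a typical run at those record sizes, not the precise benchmark commands used for these tables:

```
# Hypothetical iozone runs matching the record sizes above; file size, mount
# point, and flags are assumptions, not the exact benchmark commands used.
# 4K pass (write/rewrite, read/reread, random read/write):
iozone -e -a -s 1g -r 4k -i 0 -i 1 -i 2 -f /nvmepool/iozone.tmp
# 1M pass:
iozone -e -a -s 1g -r 1m -i 0 -i 1 -i 2 -f /nvmepool/iozone.tmp
```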

Obviously both scores are impacted by ZFS caching. I'll only really get a feel for how it works out by accessing data on the NVMe pool over my 10G LAN connection from my Mac and seeing how it compares. Hopefully it can just saturate that connection 24x7!
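
One option for taking the ARC mostly out of a future comparison (not something applied to the numbers above) is restricting caching on the pool under test:

```
# Hypothetical: cache only metadata in the ARC while benchmarking, so reads
# mostly hit the drives instead of RAM; revert to the default afterwards.
zfs set primarycache=metadata nvmepool
# ...run the benchmark...
zfs set primarycache=all nvmepool
```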