tuxera / ntfs-3g

NTFS-3G Safe Read/Write NTFS Driver
https://www.tuxera.com/company/open-source
GNU General Public License v2.0

What are the optimal ntfs-3g or kernel parameters/tunables for use with SMR media? #69

Open sedimentation-fault opened 1 year ago

sedimentation-fault commented 1 year ago

You can see from the title that this is NOT a bug. Please flag it as discussion. Maybe, if we reach some result, you can put it in the Wiki.

Description

I have searched high and low, but could find practically nothing about which ntfs-3g parameters and/or kernel I/O scheduler tunables to use when dealing with SMR (Shingled Magnetic Recording) media. In my situation, the SMR media are 2.5" external USB3 HDDs (not SSDs), encrypted with TrueCrypt or VeraCrypt. When I try to do the initial fill of such a disk with backups, consisting essentially of thousands of small (500K-2M) files, with, say, rsync, the write throughput drops after the first 10-20 gigabytes to a crawling 1MB/sec (one _Mega_byte per second)!
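
For illustration, the slowdown can be observed from a second terminal while the backup runs, with something along these lines (the device name and the source/target paths are placeholders, not taken from this report):

# Run the backup and watch device-level throughput settle after the first
# tens of gigabytes (iostat is part of sysstat).
rsync -a --info=progress2 /source/backup-set/ /mnt/smr-disk/ &
iostat -xm 5 sdX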

System

Gentoo Linux, 5.4.196 kernel, lots of RAM, ntfs3g-2022.10.3.

Suggestions

Searching around, I found (I should say stumbled upon, as it was rather by chance - I was searching for "How to increase file cache with ntfs-3g" rather than "Which I/O scheduler tunables to use with ntfs-3g and SMR media", which brings up nothing of interest in my search engine of choice) the following suggestion in Can I configure my Linux system for more aggressive file system caching?:


#!/bin/bash
modprobe bfq
for d in /sys/block/sd?; do
  # HDD (tuned for Seagate SMR drive)
  echo bfq >"$d/queue/scheduler"
  echo 4 >"$d/queue/nr_requests"
  echo 32000 >"$d/queue/iosched/back_seek_max"
  echo 3 >"$d/queue/iosched/back_seek_penalty"
  echo 80 >"$d/queue/iosched/fifo_expire_sync"
  echo 1000 >"$d/queue/iosched/fifo_expire_async"
  echo 5300 >"$d/queue/iosched/slice_idle_us"
  echo 1 >"$d/queue/iosched/low_latency"
  echo 200 >"$d/queue/iosched/timeout_sync"
  echo 0 >"$d/queue/iosched/max_budget"
  echo 1 >"$d/queue/iosched/strict_guarantees"
done

However, this is not directly applicable in my case: it is for the bfq scheduler, while I have the mq-deadline scheduler.
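
(For completeness, switching a disk over to bfq is just a matter of writing to sysfs, assuming bfq was built for the kernel as a module or built in; sdX below is a placeholder:)

# List the schedulers the kernel offers for this disk; the active one is shown in brackets.
cat /sys/block/sdX/queue/scheduler
# Load bfq if it is built as a module, then make it the active scheduler for the disk.
modprobe bfq
echo bfq > /sys/block/sdX/queue/scheduler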

What I tried

For me, the tunables in question were those of mq-deadline:

for l in c d; do
  echo "+++"; echo "sd$l"; echo "+++"; echo ''
  d=/sys/block/sd$l/queue
  echo -n "Scheduler: "; cat "$d/scheduler"
  echo '-----------------------------'
  echo -n "nr_requests: "; cat "$d/nr_requests"; echo ''
  for v in $(ls "$d/iosched/"); do
    echo -n "$v:"; cat "$d/iosched/$v"; echo ''
  done
  echo ''; echo ''
done

+++
sdc
+++

Scheduler: [mq-deadline] none
-----------------------------
nr_requests: 2

fifo_batch:16

front_merges:1

read_expire:500

write_expire:5000

writes_starved:2

+++
sdd
+++

Scheduler: [mq-deadline] none
-----------------------------
nr_requests: 2

fifo_batch:16

front_merges:1

read_expire:500

write_expire:5000

writes_starved:2

I have tried to set them to higher values:

fifo_batch=65536
nr_requests=32
write_expire=30000
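
Applied via sysfs, that looks roughly like this (sdc/sdd as in the listing above; the values are the ones stated, not a recommendation):

# Apply the larger mq-deadline/queue values to both backup disks.
for d in /sys/block/sd{c,d}/queue; do
  echo 32    > "$d/nr_requests"
  echo 65536 > "$d/iosched/fifo_batch"
  echo 30000 > "$d/iosched/write_expire"
done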

I also set the kernel virtual memory management tunables:

dirty_background_ratio=30
dirty_ratio=50
dirty_expire_centisecs=72000

sysctl vm.dirty_background_ratio=$dirty_background_ratio
sysctl vm.dirty_ratio=$dirty_ratio
sysctl vm.dirty_expire_centisecs=$dirty_expire_centisecs
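
(Should these turn out to help, they could be made persistent across reboots with a sysctl drop-in; the file name below is arbitrary:)

# /etc/sysctl.d/99-smr-writeback.conf  (hypothetical file name)
vm.dirty_background_ratio = 30
vm.dirty_ratio = 50
vm.dirty_expire_centisecs = 72000
# Reload all sysctl configuration files with: sysctl --system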

The guiding idea behind all these settings was: increase the file cache, keep the copied files in the cache as long as possible, and increase the write batch - in general, write as many bytes as possible to the drive in one batch. The hope was that the more data I wrote at once, the more consecutive writes I would send to the drive, increasing the chance that the "zones" (stretches of 256MB of consecutive data) would be written sequentially. Remember, the drives may be encrypted, which increases the entropy: data that were neighbors in the source filesystem end up scattered around randomly. Since the data looks random to the drive, I hoped to gather as many consecutive blocks as possible by collecting them in the file cache or in a "write batch".
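
On the ntfs-3g side itself, mount options aimed at fewer, larger writes would be in the same spirit; something like the following is what one might try (the VeraCrypt mapper device and mount point are placeholders, and big_writes may already be the default or a no-op with newer FUSE/ntfs-3g versions):

# Mount the decrypted volume with options intended to reduce small writes:
# noatime avoids extra metadata writes on every access, big_writes allows
# FUSE writes larger than 4KiB where that is not already the default.
ntfs-3g /dev/mapper/veracrypt1 /mnt/backup -o noatime,big_writes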

In line with the above considerations I also

At this point I am out of ideas. What would you suggest? Is there anything that can be done to increase the write performance of drive-managed SMR disks with ntfs-3g?

sedimentation-fault commented 1 year ago

On a side note: the above settings do help on an XFS filesystem, increasing SMR write performance to anywhere between 10 and 40MB/sec - that's huge compared to 1MB/sec for NTFS!
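
For anyone who wants to reproduce the comparison, a rough (non-rigorous) sketch: time the same small-file rsync job against an XFS and an NTFS filesystem on the same drive, starting each run from a cold page cache and including the final flush (paths and mount points are placeholders):

# Cold-cache, flush-inclusive timing of the same workload on both filesystems.
sync && echo 3 > /proc/sys/vm/drop_caches
time sh -c 'rsync -a /source/backup-set/ /mnt/xfs-on-smr/ && sync'
sync && echo 3 > /proc/sys/vm/drop_caches
time sh -c 'rsync -a /source/backup-set/ /mnt/ntfs-on-smr/ && sync'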