openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

ionice support #14151

Open haarp opened 1 year ago

haarp commented 1 year ago

Describe the feature you would like to see added to OpenZFS

Hello,

the CFQ and BFQ Linux IO schedulers are capable of managing IO classes and priorities via the ionice tool. Using this, one can control how the scheduler handles processes. Long-running bulk copy jobs, background backups, maintenance tasks and similar things can be put into the -c3 (idle) class so they won't interfere with more interactive loads. Latency-sensitive processes like databases can be put into a higher priority, maybe even the realtime class.

Services can be distinguished based on their priority and "need for interactivity/throughput". Basically, nice, but for IO. It's super useful.
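For anyone unfamiliar with the tool, this is roughly what that looks like today on a filesystem whose backing device uses BFQ (the commands, paths and PIDs below are only placeholders for illustration):

```sh
# Idle class (-c3): the backup only gets disk time when nothing else is
# waiting. Honored by BFQ (and the old CFQ); mq-deadline/none ignore it.
ionice -c3 rsync -a /data/ /backup/

# Best-effort class (-c2) at the highest priority level (-n0) for a
# latency-sensitive service.
ionice -c2 -n0 postgres -D /var/lib/postgresql/data

# Reclassify an already-running process by PID, or query its current class.
ionice -c3 -p 12345
ionice -p 12345
```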

I'm very surprised to find no issues or discussion regarding ionice in ZFS. It's obviously not using CFQ/BFQ, but its own ZIO scheduler (and leaving the vdev ones at noop/deadline). It does not speak ionice, wasting this precious opportunity. Is there a reason for this? Was it ever considered? Why/why not?

How will this feature improve OpenZFS?

The same way it improves other filesystems running on disks with the CFQ/BFQ schedulers: by prioritizing processes, latency and throughput can be greatly improved in mixed-workload cases. Useful on the desktop and the server.

Additional context

Here's a simple test case.

Thanks a lot!

amotin commented 1 year ago

It would be nice (hehe ;) ) if you used a more real-world benchmark. I suspect that the -i and -d combination in your stress command creates a heavy stream of synchronous writes, and since ZFS is very serious about sync guarantees, those requests are propagated to the disk, which just dies under such a workload.
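One way to check that theory while the stress run is active is to watch the pool's queues and latencies (a rough sketch; assumes a reasonably recent OpenZFS, and the pool name tank is a placeholder):

```sh
# stress -i spawns workers spinning on sync(), -d spawns workers doing
# write()/unlink(), so both pile up synchronous write pressure.
# Watch the ZIO scheduler queue depths (syncq_* vs asyncq_*) once a second:
zpool iostat -q tank 1

# Per-vdev average latencies, to see whether sync writes dominate:
zpool iostat -l -v tank 1
```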

CMCDragonkai commented 10 months ago

I've noticed when using ZFS, certain programs can hog all the IO, and there's no fairness at all. It kind of sucks when there's BIG IO going on in the background that ends up locking up all the foreground programs.

I checked my Linux IO schedulers; I'm setting all of them to none to see what happens now. They were previously mq-deadline for rpool and none for the NVMe drive that holds the ZIL + L2ARC.
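For reference, this is how the active scheduler can be checked and switched per device (the device name below is a placeholder, and the change does not persist across reboots):

```sh
# Show the available schedulers; the one in brackets is currently active.
cat /sys/block/nvme0n1/queue/scheduler

# Switch to none, leaving the ordering to ZFS's own ZIO scheduler.
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
```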

CrackerJackMack commented 9 months ago

> I've noticed when using ZFS, certain programs can hog all the IO, and there's no fairness at all. It kind of sucks when there's BIG IO going on in the background that ends up locking up all the foreground programs.
>
> I checked my Linux IO schedulers; I'm setting all of them to none to see what happens now. They were previously mq-deadline for rpool and none for the NVMe drive that holds the ZIL + L2ARC.

~ionice has no effect on ZFS, which is why the IO scheduler is set to none on disks used by ZFS, because it uses its own scheduler. ionice, as I understand it, only affects the CFQ scheduler.~

Disregard, I'm dumb. Replied thinking this was the systemd issue. :)