Closed by tehniemer 1 year ago
Can I get a bit more background from this scenario? Have you experienced parts of the array never being scrubbed?
My status reports consistently showed that some percentage of my array remained unscrubbed, 10-15% on average. This is what they looked like:
```
SnapRAID Status
Self test...
Loading state from /srv/dev-disk-by-uuid-7c66e24c-1ae0-4cd6-a727-f9a42bfe09e3/snapraid.content...
WARNING! With 5 disks it's recommended to use two parity levels.
Using 2597 MiB of memory for the file-system.
SnapRAID status report:

   Files Fragmented  Excess  Wasted    Used    Free  Use Name
            Files  Fragments    GB      GB      GB
   34348      5632     21151       -    7057    2711  72% D2
   79691      4208     20232       -    7116    2705  72% D1
   37094      5318     22152       -    7135    2691  72% D3
   42469      3936     20359       -    7126    2703  72% D4
   47209      2807     14956   -96.6    9081    2716  76% D5
  240811     21901     98850     0.0   37517   13527  73%

[scrub-age histogram (18%..0%, 9 days ago .. 0): column spacing lost in paste]

The oldest block was scrubbed 9 days ago, the median 6, the newest 0.
No sync is in progress.
The 15% of the array is not scrubbed.
You have 29 files with zero sub-second timestamp.
Run the 'touch' command to set it to a not zero value.
No rehash is in progress or needed.
No error detected.
```
My array undergoes fairly significant change on a daily basis, and I figured out that the unscrubbed percentage was the newly changed data, which is not covered by the default scrub options. Adding the option to additionally scrub new blocks brought my array to fully scrubbed status. This is the status report after adding that option:
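In plain snapraid terms, the extra option corresponds to scrub's "new" plan, which verifies only blocks that were synced but never scrubbed, run alongside the usual aged-block rotation. A minimal sketch of a nightly job along those lines (the variable names and values are illustrative, and the commands are echoed rather than executed so the sketch runs without a real array; this is not the wrapper script's actual code):

```shell
#!/bin/sh
# Two-pass nightly scrub sketch (illustrative values, not from the thread).
scrub_percent=8   # share of aged blocks to verify each night
scrub_age=10      # only re-verify blocks older than this many days

# Pass 1: blocks added or changed since the last sync, never scrubbed yet.
new_pass="snapraid scrub -p new"
# Pass 2: the usual rotation over the oldest part of the array.
aged_pass="snapraid scrub -p $scrub_percent -o $scrub_age"

# Echoed here for safety; a real cron job would run the commands instead.
echo "$new_pass"
echo "$aged_pass"
```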
```
SnapRAID Status
Self test...
Loading state from /srv/dev-disk-by-uuid-7c66e24c-1ae0-4cd6-a727-f9a42bfe09e3/snapraid.content...
WARNING! With 5 disks it's recommended to use two parity levels.
Using 2611 MiB of memory for the file-system.
SnapRAID status report:

   Files Fragmented  Excess  Wasted    Used    Free  Use Name
            Files  Fragments    GB      GB      GB
   34457      5647     21370       -    7074    2683  72% D2
   80403      4237     20724       -    7146    2671  72% D1
   37430      5223     22510       -    7210    2691  72% D3
   42585      3935     20578       -    7163    2661  72% D4
   47421      2823     15350   -91.9    9140    2662  77% D5
  242296     21865    100532     0.0   37734   13370  73%

[scrub-age histogram (17%..0%, 18 days ago .. 0): column spacing lost in paste]

The oldest block was scrubbed 18 days ago, the median 7, the newest 0.
No sync is in progress.
The full array was scrubbed at least one time.
You have 6 files with zero sub-second timestamp.
Run the 'touch' command to set it to a not zero value.
No rehash is in progress or needed.
No error detected.
```
After adding this I've also been able to lower the scrub percentage and raise the age threshold, which drastically reduced my nightly scrub time.
Interesting. My array is only 2% unscrubbed, but it does not change as frequently as yours.
What is the time taken to scrub the new parity compared to the whole process?
The whole scrub process, including both new and old blocks, takes about 1 hour 30 minutes on average. I have my percent and age set to 5% and 15 days. Previously, without scrubbing new blocks, I was using about 15% and 5 days and it was taking about 4 hours.
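As a back-of-envelope check on those settings (my own arithmetic, not from the thread): scrubbing 5% of aged blocks per night revisits the oldest block roughly every 20 nights, comfortably above the 15-day age threshold, while 15% per night cycles the array in about 7 nights, which is why each old run took so much longer:

```shell
#!/bin/sh
# Rough full-array rotation time under each setting (ceiling division).
new_rotation=$(( (100 + 5 - 1) / 5 ))    # 5% per night  -> 20 nights per pass
old_rotation=$(( (100 + 15 - 1) / 15 ))  # 15% per night -> 7 nights per pass
echo "new settings: ~${new_rotation} nights per full rotation"
echo "old settings: ~${old_rotation} nights per full rotation"
```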
Thanks for the latest updates. This really needs to be merged! PS: I want to release v3.2 in main, so I will adjust the version later.
Hey @tehniemer, where are all of your changes? I finally have some time to look at them, but there's only a doc correction to make. Have you moved them to your dev branch?
@auanasgheps Odd, not sure what happened, I'll open a new PR.
This will scrub anything new that was added to the array after a sync. For arrays that change often, it solves the issue of some percentage of the array remaining unscrubbed because those blocks never reach the age threshold.