Closed: firasdib closed this issue 10 months ago
Can you give more info, please? And post the log output if possible.
Here's the log:
##[SEVERE WARNING] SYNC job ran but did not complete successfully (SnapRAID on firasnas)
SnapRAID Script Job started [Sun 24 Sep 2023 07:00:01 AM CEST]
Running SnapRAID version 12.2
SnapRAID AIO Script version 3.3
Using configuration file: /script-config.sh
----------------------------------------
## Preprocessing
Discord notification is enabled.
The script has detected SnapRAID is already running. Please check the status of the previous SnapRAID job before running this script again.
Email address is set. Sending email report to **EMAIL** [Sun 24 Sep 2023 07:00:02 AM CEST]
DISK1 0% |
DISK2 92% | ****************************************
DISK3 0% |
DISK4 0% |
DISK5 0% |
parity 1% |
raid 4% | *
hash 1% |
sched 0% |
misc 0% |
|_____________________________________________
wait time (total, less is better)
SCRUB - Everything OK
Saving state to /DISK1/snapraid.content...
Saving state to /DISK2/snapraid.content...
Saving state to /DISK3/snapraid.content...
Saving state to /DISK4/snapraid.content...
Saving state to /DISK5/snapraid.content...
Verifying...
Verified /DISK5/snapraid.content in 8 seconds
Verified /DISK2/snapraid.content in 9 seconds
Verified /DISK4/snapraid.content in 9 seconds
Verified /DISK3/snapraid.content in 19 seconds
Verified /DISK1/snapraid.content in 32 seconds
SCRUB finished [Sun Sep 24 07:46:48 CEST 2023]
----------------------------------------
## Postprocessing
### SnapRAID Smart
SnapRAID SMART report:
Temp Power Error FP Size
C OnDays Count TB Serial Device Disk
-----------------------------------------------------------------------
36 1012 0 11% 14.0 XXXXXXXX /dev/sdb DISK1
37 118 0 4% 18.0 XXXXXXXX /dev/sdc DISK2
38 25 0 4% 18.0 XXXXXXXX /dev/sdf DISK3
34 22 0 5% 18.0 XXXXXXXX /dev/sdg DISK4
30 2 0 5% 18.0 XXXXXXXX /dev/sda DISK5
37 138 0 4% 18.0 XXXXXXXX /dev/sde parity
28 174 0 SSD 0.3 XXXXXXXX /dev/sdd -
- - 0 - - XXXXXXXX /dev/nvme0n1 -
The FP column is the estimated probability (in percentage) that the disk
is going to fail in the next year.
Probability that at least one disk is going to fail in the next year is 27%.
WARNING! With 5 disks it's recommended to use two parity levels.
### SnapRAID Status
Self test...
Loading state from /DISK1/snapraid.content...
Using 2538 MiB of memory for the file-system.
SnapRAID status report:
Files Fragmented Excess Wasted Used Free Use Name
Files Fragments GB GB GB
13805 124 609 - 6086 6537 48% DISK1
3718 74 278 - 10418 6066 63% DISK2
11472 230 1318 - 10669 5474 66% DISK3
2554 16 118 - 10875 5524 66% DISK4
3 0 0 -2.3 0 17870 0% DISK5
--------------------------------------------------------------------------
31552 444 2323 0.0 38049 41473 47%
42%| * o
| * *
| * *
| * *
| * *
| * *
| * *
21%| * *
| * *
| * *
| * *
| * *
| * *
| * * *
0%|*__**__*___*___*___*__*__*___*___*___**_*___________*_______o___*__*_*
19 days ago of the last scrub/sync 0
The oldest block was scrubbed 19 days ago, the median 1, the newest 0.
No sync is in progress.
The 1% of the array is not scrubbed.
No file has a zero sub-second timestamp.
No rehash is in progress or needed.
No error detected.
curl: (22) The requested URL returned error: 400
All jobs ended. [Sun Sep 24 07:48:08 CEST 2023]
----------------------------------------
## Total time elapsed for SnapRAID: 20hrs 19min 34sec
Email address is set. Sending email report to **EMAIL** [Sun Sep 24 07:48:08 CEST 2023]
> The script has detected SnapRAID is already running.

Here's why!
This doesn't look right though. The script stops everything if snapraid is running. Are you sure you posted the correct part of the log?
But it wasn't. Check the end of the script.
Also, the warning about 5 drives is sent to stderr, so maybe that's also a problem.
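To illustrate the stderr point with a toy example (my own illustration, not the script's actual code): if a wrapper captures both streams together, the five-disk warning gets mixed into the text it parses, while redirecting stderr keeps the captured output clean.

```shell
#!/bin/sh
# Toy stand-in for a snapraid command that succeeds on stdout
# but prints the two-parity recommendation on stderr.
fake_snapraid() {
  echo "Everything OK"
  echo "WARNING! With 5 disks it's recommended to use two parity levels." >&2
}

# Capturing both streams mixes the warning into the parsed output...
combined=$(fake_snapraid 2>&1)

# ...while discarding stderr keeps only the stdout result.
clean=$(fake_snapraid 2>/dev/null)

echo "combined: $combined"
echo "clean: $clean"
```

Whether this matters depends on how the real script captures snapraid's output, which I haven't checked here.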
Let's do things in order.
The script checks whether snapraid is already running by simply running the following: `pgrep -x snapraid`
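A minimal sketch of such a guard (my own illustration based on the command above, not the script's exact code) could look like this:

```shell
#!/bin/sh
# Hypothetical "already running" guard modeled on the pgrep check.
# pgrep -x matches the process name exactly and exits 0 when a match exists.
is_running() {
  pgrep -x "$1" > /dev/null 2>&1
}

if is_running snapraid; then
  echo "The script has detected SnapRAID is already running."
  exit 1
fi
echo "No running snapraid process detected; continuing."
```

Note that `pgrep -x` matches only the exact process name, so a partial name like `snapraid-aio` would not trigger it.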
If you run the command manually, what do you see?
Yeah, that seems to return the pid of the process if running, which is correct.
Not sure how it ended up claiming snapraid was running, but then still produced a log somehow.
Either way, I ended up creating my own script here (https://github.com/firasdib/snapper) to avoid spamming you with my personal requests, etc. :P
Thank you for all you've done!
You are not spamming, comments like yours help improve my script :)
> Not sure how it ended up claiming snapraid was running, but then still produced a log somehow.
It's a mystery to me too. I'm not a coder and I do this for fun, but the code is simple: notify if snapraid is running, then exit. Your title contains another message, so I'm thinking the script overlapped itself somehow. It's an edge case, so I hope it doesn't happen again in the future, because it would be a nightmare to troubleshoot.
Good luck with your script!
I get this error, and I believe it's incorrect. I am running an unrecommended setup, with one parity drive for 5 data drives, and snapraid outputs a warning for each command. I think this is confusing the script.
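For context, a single-parity layout like the one described would look roughly like this in snapraid.conf (the paths here are placeholders, not the actual mounts from this setup):

```
# Hypothetical snapraid.conf sketch: one parity file protecting five data disks.
parity /mnt/parity1/snapraid.parity

content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
data d4 /mnt/disk4/
data d5 /mnt/disk5/
```

With five data disks and only one parity level, snapraid prints the two-parity recommendation on every run, which is the warning in question.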