openzfs / zfs

OpenZFS on Linux and FreeBSD
https://openzfs.github.io/openzfs-docs

After simulated EIO ZED does not appear to bring spare online #3695

Closed jsalinasintel closed 6 years ago

jsalinasintel commented 9 years ago

Please let me know if my process is flawed for having ZED bring a spare drive online. This test was designed to simulate EIO errors that ZED would detect and act upon. The detection appears to be working, but no action, such as faulting the drive and bringing the spare online, appears to have happened.

Create the files to use as disks (shown here with truncate as an example; each image must be large enough to back the 1048576-sector mappings below)
for i in 1 2 3 4 5 
do 
    truncate -s 512M virtualdisk$i.img
done
Create loopback devices
for i in 1 2 3 4 5 
do 
    losetup /dev/loop$i virtualdisk$i.img
done
/dev/loop1: [0803]:6946818 (/code/simdiskissue/virtualdisk1.img)
/dev/loop2: [0803]:6946819 (/code/simdiskissue/virtualdisk2.img)
/dev/loop3: [0803]:6946821 (/code/simdiskissue/virtualdisk3.img)
/dev/loop4: [0803]:6946820 (/code/simdiskissue/virtualdisk4.img)
/dev/loop5: [0803]:6946822 (/code/simdiskissue/virtualdisk5.img)
Create device-mapper entries
echo "0 1048576 linear /dev/loop1 0" | dmsetup create sane_dev1
echo "0 1048576 linear /dev/loop3 0" | dmsetup create sane_dev3
echo "0 1048576 linear /dev/loop4 0" | dmsetup create sane_dev4
echo "0 1048576 linear /dev/loop5 0" | dmsetup create sane_dev5
# dmsetup create errdev0
0 261144 linear /dev/loop2 0
261144 5 error
261149 787427 linear /dev/loop2 261139
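For reference, the three table lines above describe a 1048576-sector (512 MiB) device: the first 261144 sectors pass through to /dev/loop2, the next 5 sectors use the device-mapper "error" target (all I/O to them fails with EIO), and the remaining 787427 sectors map back to /dev/loop2 (261144 + 5 + 787427 = 1048576). The same table can also be fed to dmsetup on stdin in a single step, e.g.:

dmsetup create errdev0 <<EOF
0 261144 linear /dev/loop2 0
261144 5 error
261149 787427 linear /dev/loop2 261139
EOF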
Now we have: 
lrwxrwxrwx  1 root root      7 Aug 19 14:51 sane_dev1 -> ../dm-0        /dev/loop1
lrwxrwxrwx  1 root root      7 Aug 19 14:51 sane_dev3 -> ../dm-1        /dev/loop3
lrwxrwxrwx  1 root root      7 Aug 19 14:51 sane_dev4 -> ../dm-2        /dev/loop4
lrwxrwxrwx  1 root root      7 Aug 19 14:51 sane_dev5 -> ../dm-3        /dev/loop5
lrwxrwxrwx  1 root root      7 Aug 19 14:59 errdev0 -> ../dm-4          /dev/loop2 

zpool create -f diskerrors raidz /dev/dm-0 /dev/dm-4 /dev/dm-2 /dev/dm-3 spare /dev/dm-1
zfs create diskerrors/coral-simulate-errors

# zpool events 
TIME                           CLASS
Aug 18 2015 15:28:28.478999945 resource.fs.zfs.statechange
Aug 18 2015 15:28:28.478999945 resource.fs.zfs.statechange
Aug 18 2015 15:28:28.478999945 resource.fs.zfs.statechange
Aug 18 2015 15:28:28.478999945 resource.fs.zfs.statechange
Aug 18 2015 15:28:28.937000432 resource.fs.zfs.statechange
Aug 18 2015 15:28:29.409999931 resource.fs.zfs.statechange
Aug 18 2015 15:28:29.461000004 resource.fs.zfs.statechange
Aug 18 2015 15:28:29.465999999 resource.fs.zfs.statechange
Aug 18 2015 15:28:29.962000160 ereport.fs.zfs.config.sync
Aug 18 2015 15:28:31.222000052 ereport.fs.zfs.config.sync
Aug 19 2015 15:06:16.674500795 resource.fs.zfs.statechange
Aug 19 2015 15:06:16.674500795 resource.fs.zfs.statechange
Aug 19 2015 15:06:16.674500795 resource.fs.zfs.statechange
Aug 19 2015 15:06:16.674500795 resource.fs.zfs.statechange
Aug 19 2015 15:06:16.792507403 resource.fs.zfs.statechange
Aug 19 2015 15:06:16.829509475 resource.fs.zfs.statechange
Aug 19 2015 15:06:17.564550634 ereport.fs.zfs.config.sync
# zpool status -v diskerrors
  pool: diskerrors
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    diskerrors  ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        dm-0    ONLINE       0     0     0
        dm-4    ONLINE       0     0     0
        dm-2    ONLINE       0     0     0
        dm-3    ONLINE       0     0     0
    spares
      dm-1      AVAIL   

errors: No known data errors
# /etc/init.d/zfs-zed status
zed (pid  2952) is running...
# ps -aef |grep zed 
root       2952      1  0 Aug18 ?        00:00:00 /sbin/zed -p /var/run/zed.pid
# vi zed.rc
# grep ERROR zed.rc
ZED_SPARE_ON_CHECKSUM_ERRORS=2
ZED_SPARE_ON_IO_ERRORS=1
# /etc/init.d/zfs-zed restart
Stopping ZFS Event Daemon                                  [  OK  ]
Starting ZFS Event Daemon                                  [  OK  ] 
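To double-check that the spare-handling zedlets are actually present and executable before generating errors, the zedlet directory can be listed (assuming the default location /etc/zfs/zed.d, or wherever zed's -d option points):

# ls -l /etc/zfs/zed.d/ | grep -E 'spare|notify'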
# ./iozone -+d -i 0 -i 1 -i 6 -r 1M -t 2 -s 1G -F /diskerrors/coral-simulate-errors/iozonefile1 /diskerrors/coral-simulate-errors/iozonefile2 
# cd /diskerrors/coral-simulate-errors/
# ls -larth
-rw-r----- 1 root root 272M Aug 19 15:25 iozonefile2
-rw-r----- 1 root root 272M Aug 19 15:25 iozonefile1

After a few seconds of running, new events appeared:

Aug 19 2015 15:25:04.659873304 ereport.fs.zfs.io
Aug 19 2015 15:25:04.659873304 ereport.fs.zfs.io
Aug 19 2015 15:25:04.659873304 ereport.fs.zfs.io

ZFS has detected an io error:

   eid: 18
class: io
  host: onyx-29.onyx.hpdd.intel.com
  time: 2015-08-19 15:25:04-0700
vtype: disk
vpath: /dev/dm-4
vguid: 0x506391A801B3F284
cksum: 0
  read: 0
write: 0
  pool: diskerrors
# zpool status -v diskerrors
  pool: diskerrors
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    diskerrors  ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        dm-0    ONLINE       0     0     0
        dm-4    ONLINE       0     0     0
        dm-2    ONLINE       0     0     0
        dm-3    ONLINE       0     0     0
    spares
      dm-1      AVAIL   

errors: No known data errors

Error writing block 727, fd= 3

Error writing block 732, fd= 3 write: No space left on device

Here are the messages from /var/log/messages for this run:

Aug 19 15:06:16 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=12 pid=16892
Aug 19 15:06:16 onyx-29 zed: eid=12 class=statechange 
Aug 19 15:06:16 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=12 pid=16892 exit=0
Aug 19 15:06:16 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=11 pid=16894
Aug 19 15:06:16 onyx-29 zed: eid=11 class=statechange 
Aug 19 15:06:16 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=11 pid=16894 exit=0
Aug 19 15:06:16 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=13 pid=16896
Aug 19 15:06:16 onyx-29 zed: eid=13 class=statechange 
Aug 19 15:06:16 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=13 pid=16896 exit=0
Aug 19 15:06:16 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=14 pid=16898
Aug 19 15:06:16 onyx-29 zed: eid=14 class=statechange 
Aug 19 15:06:16 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=14 pid=16898 exit=0
Aug 19 15:06:16 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=15 pid=16900
Aug 19 15:06:16 onyx-29 zed: eid=15 class=statechange 
Aug 19 15:06:16 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=15 pid=16900 exit=0
Aug 19 15:06:16 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=16 pid=16902
Aug 19 15:06:16 onyx-29 zed: eid=16 class=statechange 
Aug 19 15:06:16 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=16 pid=16902 exit=0
Aug 19 15:06:17 onyx-29 zed[2952]: Invoking "all-syslog.sh" eid=17 pid=16918
Aug 19 15:06:17 onyx-29 zed: eid=17 class=config.sync pool=diskerrors
Aug 19 15:06:17 onyx-29 zed[2952]: Finished "all-syslog.sh" eid=17 pid=16918 exit=0
Aug 19 15:19:46 onyx-29 zed[2952]: Exiting
Aug 19 15:19:46 onyx-29 zed[17126]: ZFS Event Daemon 0.6.4-184_g6bec435 (PID 17126)
Aug 19 15:19:46 onyx-29 zed[17126]: Processing events since eid=17
Aug 19 15:22:38 onyx-29 zed[17126]: Exiting
Aug 19 15:22:38 onyx-29 zed[17176]: ZFS Event Daemon 0.6.4-184_g6bec435 (PID 17176)
Aug 19 15:22:38 onyx-29 zed[17176]: Processing events since eid=17
Aug 19 15:25:04 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=18 pid=17305
Aug 19 15:25:04 onyx-29 zed: eid=18 class=io pool=diskerrors
Aug 19 15:25:04 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=18 pid=17305 exit=0
Aug 19 15:25:04 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=18 pid=17307
Aug 19 15:25:05 onyx-29 zed[17176]: Finished "io-notify.sh" eid=18 pid=17307 exit=0
Aug 19 15:25:05 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=18 pid=17336
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "io-spare.sh" eid=18 pid=17336 exit=2
Aug 19 15:25:06 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=19 pid=17338
Aug 19 15:25:06 onyx-29 zed: eid=19 class=io pool=diskerrors
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=19 pid=17338 exit=0
Aug 19 15:25:06 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=19 pid=17341
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "io-notify.sh" eid=19 pid=17341 exit=3
Aug 19 15:25:06 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=19 pid=17354
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "io-spare.sh" eid=19 pid=17354 exit=2
Aug 19 15:25:06 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=20 pid=17364
Aug 19 15:25:06 onyx-29 zed: eid=20 class=io pool=diskerrors
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=20 pid=17364 exit=0
Aug 19 15:25:06 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=20 pid=17366
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "io-notify.sh" eid=20 pid=17366 exit=3
Aug 19 15:25:06 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=20 pid=17381
Aug 19 15:25:06 onyx-29 zed[17176]: Finished "io-spare.sh" eid=20 pid=17381 exit=2
Aug 19 15:28:59 onyx-29 sssd[be[onyx.hpdd.intel.com]]: dereference processing failed : Input/output error
# cat zed.rc
##
# zed.rc
#
# This file should be owned by root and permissioned 0600.
##

##
# Absolute path to the debug output file.
#
ZED_DEBUG_LOG="/tmp/zed.debug.log"

##
# Email address of the zpool administrator for receipt of notifications;
#   multiple addresses can be specified if they are delimited by whitespace.
# Email will only be sent if ZED_EMAIL_ADDR is defined.
# Disabled by default; uncomment to enable.
#
ZED_EMAIL_ADDR="john.salinas@intel.com"

##
# Name or path of executable responsible for sending notifications via email;
#   the mail program must be capable of reading a message body from stdin.
# Email will only be sent if ZED_EMAIL_ADDR is defined.
#
#ZED_EMAIL_PROG="mail"

##
# Command-line options for ZED_EMAIL_PROG.
# The string @ADDRESS@ will be replaced with the recipient email address(es).
# The string @SUBJECT@ will be replaced with the notification subject;
#   this should be protected with quotes to prevent word-splitting.
# Email will only be sent if ZED_EMAIL_ADDR is defined.
#
#ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"

##
# Default directory for zed lock files.
#
#ZED_LOCKDIR="/var/lock"

##
# Minimum number of seconds between notifications for a similar event.
#
#ZED_NOTIFY_INTERVAL_SECS=3600

##
# Notification verbosity.
#   If set to 0, suppress notification if the pool is healthy.
#   If set to 1, send notification regardless of pool health.
#
ZED_NOTIFY_VERBOSE=0

##
# Pushbullet access token.
# This grants full access to your account -- protect it accordingly!
#   <https://www.pushbullet.com/get-started>
#   <https://www.pushbullet.com/account>
# Disabled by default; uncomment to enable.
#
#ZED_PUSHBULLET_ACCESS_TOKEN=""

##
# Pushbullet channel tag for push notification feeds that can be subscribed to.
#   <https://www.pushbullet.com/my-channel>
# If not defined, push notifications will instead be sent to all devices
#   associated with the account specified by the access token.
# Disabled by default; uncomment to enable.
#
#ZED_PUSHBULLET_CHANNEL_TAG=""

##
# Default directory for zed state files.
#
#ZED_RUNDIR="/var/run"

##
# Replace a device with a hot spare after N checksum errors are detected.
# Disabled by default; uncomment to enable.
#
ZED_SPARE_ON_CHECKSUM_ERRORS=2

##
# Replace a device with a hot spare after N I/O errors are detected.
# Disabled by default; uncomment to enable.
#
ZED_SPARE_ON_IO_ERRORS=1

##
# The syslog priority (e.g., specified as a "facility.level" pair).
#
#ZED_SYSLOG_PRIORITY="daemon.notice"

##
# The syslog tag for marking zed events.
#
#ZED_SYSLOG_TAG="zed"  

Also it does not appear that the debug log exists:

# ls -lart /tmp/*zed*
ls: cannot access /tmp/*zed*: No such file or directory
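One way to see what io-spare.sh is doing, and why it keeps exiting with status 2, is to stop the service and run zed in the foreground with verbose logging (a debugging sketch using zed's -F and -v options):

# /etc/init.d/zfs-zed stop
# zed -Fv

With the daemon in the foreground, event handling can be watched directly while the errors are injected.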

In case it just needed more errors, I re-ran the same iozone command a couple more times:

Original:
Aug 19 2015 15:25:04.659873304 ereport.fs.zfs.io
Aug 19 2015 15:25:04.659873304 ereport.fs.zfs.io
Aug 19 2015 15:25:04.659873304 ereport.fs.zfs.io
New:
Aug 19 2015 16:34:26.910640705 ereport.fs.zfs.io
Aug 19 2015 16:34:26.910640705 ereport.fs.zfs.io
Aug 19 2015 16:34:26.910640705 ereport.fs.zfs.io
Aug 19 2015 16:37:59.583567708 ereport.fs.zfs.io
Aug 19 2015 16:37:59.583567708 ereport.fs.zfs.io
Aug 19 2015 16:37:59.583567708 ereport.fs.zfs.io 

New messages in /var/log/messages:

Aug 19 15:28:59 onyx-29 sssd[be[onyx.hpdd.intel.com]]: dereference processing failed : Input/output error
Aug 19 16:34:26 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=21 pid=29424
Aug 19 16:34:26 onyx-29 zed: eid=21 class=io pool=diskerrors
Aug 19 16:34:26 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=21 pid=29424 exit=0
Aug 19 16:34:26 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=21 pid=29432
Aug 19 16:34:26 onyx-29 zed[17176]: Finished "io-notify.sh" eid=21 pid=29432 exit=0
Aug 19 16:34:26 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=21 pid=29459
Aug 19 16:34:26 onyx-29 zed[17176]: Finished "io-spare.sh" eid=21 pid=29459 exit=2
Aug 19 16:34:27 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=22 pid=29461
Aug 19 16:34:27 onyx-29 zed: eid=22 class=io pool=diskerrors
Aug 19 16:34:27 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=22 pid=29461 exit=0
Aug 19 16:34:27 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=22 pid=29463
Aug 19 16:34:27 onyx-29 zed[17176]: Finished "io-notify.sh" eid=22 pid=29463 exit=3
Aug 19 16:34:27 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=22 pid=29477
Aug 19 16:34:27 onyx-29 zed[17176]: Finished "io-spare.sh" eid=22 pid=29477 exit=2
Aug 19 16:34:27 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=23 pid=29479
Aug 19 16:34:27 onyx-29 zed: eid=23 class=io pool=diskerrors
Aug 19 16:34:27 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=23 pid=29479 exit=0
Aug 19 16:34:27 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=23 pid=29482
Aug 19 16:34:27 onyx-29 zed[17176]: Finished "io-notify.sh" eid=23 pid=29482 exit=3
Aug 19 16:34:27 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=23 pid=29504
Aug 19 16:34:27 onyx-29 zed[17176]: Finished "io-spare.sh" eid=23 pid=29504 exit=2
Aug 19 16:37:59 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=24 pid=33659
Aug 19 16:37:59 onyx-29 zed: eid=24 class=io pool=diskerrors
Aug 19 16:37:59 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=24 pid=33659 exit=0
Aug 19 16:37:59 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=24 pid=33661
Aug 19 16:37:59 onyx-29 zed[17176]: Finished "io-notify.sh" eid=24 pid=33661 exit=3
Aug 19 16:37:59 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=24 pid=33674
Aug 19 16:37:59 onyx-29 zed[17176]: Finished "io-spare.sh" eid=24 pid=33674 exit=2
Aug 19 16:38:00 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=25 pid=33684
Aug 19 16:38:00 onyx-29 zed: eid=25 class=io pool=diskerrors
Aug 19 16:38:00 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=25 pid=33684 exit=0
Aug 19 16:38:00 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=25 pid=33686
Aug 19 16:38:01 onyx-29 zed[17176]: Finished "io-notify.sh" eid=25 pid=33686 exit=3
Aug 19 16:38:01 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=25 pid=33699
Aug 19 16:38:01 onyx-29 zed[17176]: Finished "io-spare.sh" eid=25 pid=33699 exit=2
Aug 19 16:38:01 onyx-29 zed[17176]: Invoking "all-syslog.sh" eid=26 pid=33700
Aug 19 16:38:01 onyx-29 zed: eid=26 class=io pool=diskerrors
Aug 19 16:38:01 onyx-29 zed[17176]: Finished "all-syslog.sh" eid=26 pid=33700 exit=0
Aug 19 16:38:01 onyx-29 zed[17176]: Invoking "io-notify.sh" eid=26 pid=33703
Aug 19 16:38:01 onyx-29 zed[17176]: Finished "io-notify.sh" eid=26 pid=33703 exit=3
Aug 19 16:38:01 onyx-29 zed[17176]: Invoking "io-spare.sh" eid=26 pid=33716
Aug 19 16:38:01 onyx-29 zed[17176]: Finished "io-spare.sh" eid=26 pid=33716 exit=2 
# zpool status -v diskerrors
  pool: diskerrors
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    diskerrors  ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        dm-0    ONLINE       0     0     0
        dm-4    ONLINE       0     0     0
        dm-2    ONLINE       0     0     0
        dm-3    ONLINE       0     0     0
    spares
      dm-1      AVAIL   

errors: No known data errors 

It does not appear that I was able to trigger the hot spare coming online or the faulting of the drive taking the errors.

loli10K commented 6 years ago

This seems to be working (should also be tested by the buildbots):
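Note: in the transcript below, $tmpdir is assumed to have been defined earlier in the session, e.g.:

tmpdir=$(mktemp -d)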

root@linux:~# cat /sys/module/zfs/version 
0.7.0-181_g62df1bc
root@linux:~# /etc/init.d/zfs-zed restart ; /etc/init.d/zfs-zed status
Restarting zfs-zed (via systemctl): zfs-zed.service.
● zfs-zed.service - ZFS Event Daemon (zed)
   Loaded: loaded (/usr/lib/systemd/system/zfs-zed.service; disabled)
   Active: active (running) since Sun 2017-11-19 12:01:40 CET; 15ms ago
     Docs: man:zed(8)
 Main PID: 570 ((zed))
   CGroup: /system.slice/zfs-zed.service
           └─570 (zed)
root@linux:~# for i in 1 2 3 4 5 
> do 
>     truncate -s 128m $tmpdir/virtualdisk$i.img
>     losetup /dev/loop$i $tmpdir/virtualdisk$i.img
> done
root@linux:~# echo "0 131072 linear /dev/loop1 0" | dmsetup create sanedev1
root@linux:~# echo "0 131072 linear /dev/loop3 0" | dmsetup create sanedev2
root@linux:~# echo "0 131072 linear /dev/loop4 0" | dmsetup create sanedev3
root@linux:~# echo "0 131072 linear /dev/loop5 0" | dmsetup create sanedev4
root@linux:~# echo "0 65536 linear /dev/loop2 0
> 65536 5 error
> 65541 65531 linear /dev/loop2 65541" | dmsetup create errdev1
root@linux:~# zpool create -f diskerrors raidz /dev/mapper/sanedev1 /dev/mapper/sanedev2 /dev/mapper/errdev1 /dev/mapper/sanedev4 spare /dev/mapper/sanedev3
root@linux:~# zpool status -v
  pool: diskerrors
 state: ONLINE
  scan: none requested
config:

    NAME          STATE     READ WRITE CKSUM
    diskerrors    ONLINE       0     0     0
      raidz1-0    ONLINE       0     0     0
        sanedev1  ONLINE       0     0     0
        sanedev2  ONLINE       0     0     0
        errdev1   ONLINE       0     0     0
        sanedev4  ONLINE       0     0     0
    spares
      sanedev3    AVAIL   

errors: No known data errors
root@linux:~# zpool events -c
cleared 21 events
root@linux:~# dd if=/dev/zero of=/diskerrors/data.bin &
[1] 866
root@linux:~# zpool events -f
TIME                           CLASS
Nov 19 2017 12:02:34.680000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.684000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.684000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.812000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.812000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.812000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.812000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.812000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.812000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.824000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.824000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.824000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.824000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.824000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.824000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.828000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.828000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.828000000 ereport.fs.zfs.io
Nov 19 2017 12:02:34.828000000 ereport.fs.zfs.io
Nov 19 2017 12:02:51.112000000 resource.fs.zfs.statechange
Nov 19 2017 12:02:54.176000000 sysevent.fs.zfs.config_sync
Nov 19 2017 12:02:54.204000000 sysevent.fs.zfs.vdev_spare
Nov 19 2017 12:02:54.204000000 sysevent.fs.zfs.vdev_attach
Nov 19 2017 12:02:56.404000000 sysevent.fs.zfs.resilver_start
Nov 19 2017 12:02:56.404000000 sysevent.fs.zfs.history_event
^C
root@linux:~# zpool status
  pool: diskerrors
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: resilvered 9.30M in 0h0m with 0 errors on Sun Nov 19 12:03:00 2017
config:

    NAME            STATE     READ WRITE CKSUM
    diskerrors      DEGRADED     0     0     0
      raidz1-0      DEGRADED     0     0     0
        sanedev1    ONLINE       0     0     0
        sanedev2    ONLINE       0     0     0
        spare-2     DEGRADED     0     0     0
          errdev1   FAULTED      0     0     0  too many errors
          sanedev3  ONLINE       0     0     0
        sanedev4    ONLINE       0     0     0
    spares
      sanedev3      INUSE     currently in use

errors: No known data errors
root@linux:~#
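As a possible follow-up (not shown above), once the underlying device has been repaired the fault can be cleared and the spare returned to the AVAIL state, for example:

zpool clear diskerrors errdev1
zpool detach diskerrors sanedev3

zpool clear resets the error counters and fault state on errdev1, and detaching sanedev3 from the spare-2 vdev returns it to the spares list.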