Rudd-O closed this issue 8 months ago.
A good number of commits in that range overlap with the state of 2.2. It's a shame you probably can't bisect, since the data is already in peril. Is there any way you could do that?
If I had a solid reproducer or spare hardware to set up an experiment, I would most certainly run it over here.
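If a solid reproducer and spare hardware do turn up, the bisect itself is mechanical. A rough sketch, assuming zfs-2.2.0 is the known-good tag and zfs-2.2.1 the known-bad one (which is what later comments in this thread suggest), and assuming a hypothetical reproduce.sh that rebuilds/loads the module and exits non-zero when the write errors appear:
```
git clone https://github.com/openzfs/zfs.git && cd zfs
git bisect start
git bisect bad zfs-2.2.1      # assumption: first release that shows the problem
git bisect good zfs-2.2.0     # assumption: last known-good release
git bisect run ./reproduce.sh # hypothetical test script; exits non-zero on failure
```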
Was anything logged by the kernel or zed when the write errors happened?
Please clarify "backup machine": is that a machine/pool you're sending streams to? Or just another unrelated computer?
The system log should show the type of errors; can you isolate them and post them here? And zpool events -v will have more details on the write errors that might help with diagnosis.
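If it helps, something along these lines should gather the relevant bits in one go (a rough sketch; the time window and the pool name chest are assumptions, adjust as needed):
```
# Rough sketch: time window and pool name ('chest') are assumptions.
journalctl -k --since "-2h" | grep -i 'zio pool='
journalctl -u zfs-zed --since "-2h"
zpool events -v chest > zpool-events.txt
```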
Yes, please stand by....
Here is a sample of zpool events + journalctl logs from that time:
```
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1155432600401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0xb1310329a9559f6b
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0xb1310329a9559f6b
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718"
vdev_devid = "dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114c1b98
vdev_delta_ts = 0x4c160e
vdev_read_errors = 0x0
vdev_write_errors = 0x6d
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f001f0f
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb94000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0x0
zio_blkid = 0x1b
time = 0x65560494 0x68a7db
eid = 0x247
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1155f72c00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x70
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da10fea9fd
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb9e000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0x1
zio_blkid = 0x1
time = 0x65560494 0x68a7db
eid = 0x248
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1156897900401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x71
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da10fe2150
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb9d000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0x68a7db
eid = 0x249
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da114e33e800401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x72
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f096038
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb9c000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3c
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0x68a7db
eid = 0x24a
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da11515e0100401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x73
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f07e20b
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb9b000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x38
zio_level = 0x0
zio_blkid = 0x1
time = 0x65560494 0x68a7db
eid = 0x24b
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1151dd4d00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x74
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f06269a
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb9a000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x34
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0x68a7db
eid = 0x24c
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da11527bda00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x75
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f054403
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb99000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x30
zio_level = 0x0
zio_blkid = 0x1
time = 0x65560494 0x68a7db
eid = 0x24d
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1153079700401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x76
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f04355d
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb98000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x1c
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0x68a7db
eid = 0x24e
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da115394d600401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x77
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f032e66
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb97000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x1
zio_level = 0x0
zio_blkid = 0x1
time = 0x65560494 0x68a7db
eid = 0x24f
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1154212800401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x78
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f024b25
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb96000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0x0
zio_blkid = 0xdc
time = 0x65560494 0x68a7db
eid = 0x250
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1154b52500401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x79
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f01227a
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb95000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0x0
zio_blkid = 0x1e
time = 0x65560494 0x68a7db
eid = 0x251
Nov 16 2023 12:01:24.006858715 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da1155432600401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x9f8a1edbb1089f54
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x9f8a1edbb1089f54
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_devid = "dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da114b6eac
vdev_delta_ts = 0x4bd967
vdev_read_errors = 0x0
vdev_write_errors = 0x7a
vdev_cksum_errors = 0x2
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0f0014cd
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fbb94000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x0
zio_level = 0x0
zio_blkid = 0x1b
time = 0x65560494 0x68a7db
eid = 0x252
Nov 16 2023 12:01:24.011858512 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da11a746ea00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x942ee94b9fd31420
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x942ee94b9fd31420
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da11a6bc0c
vdev_delta_ts = 0x7367fb
vdev_read_errors = 0x0
vdev_write_errors = 0x77
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x40080480
zio_stage = 0x2000000
zio_pipeline = 0x2100000
zio_delay = 0x40c81
zio_timestamp = 0x2da0e8793f2
zio_delta = 0x71c1a8
zio_priority = 0x3
zio_offset = 0x414fb6d9000
zio_size = 0x4000
time = 0x65560494 0xb4f250
eid = 0x253
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0ee61b6c00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x942ee94b9fd31420
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x942ee94b9fd31420
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da11a6bc0c
vdev_delta_ts = 0x7367fb
vdev_read_errors = 0x0
vdev_write_errors = 0x78
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0e8a91a5
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb6dc000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3cf
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0xc43468
eid = 0x254
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0ee61b6c00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x8acc53be158c8a67
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x8acc53be158c8a67
vdev_type = "mirror"
vdev_ashift = 0xc
vdev_complete_ts = 0x0
vdev_delta_ts = 0x0
vdev_read_errors = 0x0
vdev_write_errors = 0x57
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x2336a45308072aca
parent_type = "root"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x104080
zio_stage = 0x2000000
zio_pipeline = 0x2100000
zio_delay = 0x0
zio_timestamp = 0x0
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb2dc000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3cf
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0xc43468
eid = 0x255
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0eea101e00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x942ee94b9fd31420
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x942ee94b9fd31420
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da11a6bc0c
vdev_delta_ts = 0x7367fb
vdev_read_errors = 0x0
vdev_write_errors = 0x79
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0e89bf0e
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb6db000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3cb
zio_level = 0x1
zio_blkid = 0x0
time = 0x65560494 0xc43468
eid = 0x256
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0eea101e00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x8acc53be158c8a67
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x8acc53be158c8a67
vdev_type = "mirror"
vdev_ashift = 0xc
vdev_complete_ts = 0x0
vdev_delta_ts = 0x0
vdev_read_errors = 0x0
vdev_write_errors = 0x58
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x2336a45308072aca
parent_type = "root"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x104080
zio_stage = 0x2000000
zio_pipeline = 0x2100000
zio_delay = 0x0
zio_timestamp = 0x0
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb2db000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3cb
zio_level = 0x1
zio_blkid = 0x0
time = 0x65560494 0xc43468
eid = 0x257
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0eebb8fa00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x942ee94b9fd31420
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x942ee94b9fd31420
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da11a6bc0c
vdev_delta_ts = 0x7367fb
vdev_read_errors = 0x0
vdev_write_errors = 0x7a
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0e88e642
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb6da000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3cb
zio_level = 0x0
zio_blkid = 0x1
time = 0x65560494 0xc43468
eid = 0x258
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0eebb8fa00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x8acc53be158c8a67
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x8acc53be158c8a67
vdev_type = "mirror"
vdev_ashift = 0xc
vdev_complete_ts = 0x0
vdev_delta_ts = 0x0
vdev_read_errors = 0x0
vdev_write_errors = 0x59
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x2336a45308072aca
parent_type = "root"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x104080
zio_stage = 0x2000000
zio_pipeline = 0x2100000
zio_delay = 0x0
zio_timestamp = 0x0
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb2da000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x3cb
zio_level = 0x0
zio_blkid = 0x1
time = 0x65560494 0xc43468
eid = 0x259
Nov 16 2023 12:01:24.012858472 ereport.fs.zfs.io
class = "ereport.fs.zfs.io"
ena = 0x2da0eed7a3a00401
detector = (embedded nvlist)
version = 0x0
scheme = "zfs"
pool = 0x2336a45308072aca
vdev = 0x942ee94b9fd31420
(end detector)
pool = "chest"
pool_guid = 0x2336a45308072aca
pool_state = 0x0
pool_context = 0x0
pool_failmode = "wait"
vdev_guid = 0x942ee94b9fd31420
vdev_type = "disk"
vdev_path = "/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb"
vdev_ashift = 0xc
vdev_complete_ts = 0x2da11a6bc0c
vdev_delta_ts = 0x7367fb
vdev_read_errors = 0x0
vdev_write_errors = 0x7b
vdev_cksum_errors = 0x0
vdev_delays = 0x0
parent_guid = 0x8acc53be158c8a67
parent_type = "mirror"
vdev_spare_paths =
vdev_spare_guids =
zio_err = 0x5
zio_flags = 0x380080
zio_stage = 0x2000000
zio_pipeline = 0x2e00000
zio_delay = 0x0
zio_timestamp = 0x2da0e8793f2
zio_delta = 0x0
zio_priority = 0x3
zio_offset = 0x414fb6d9000
zio_size = 0x1000
zio_objset = 0x0
zio_object = 0x11d
zio_level = 0x0
zio_blkid = 0x0
time = 0x65560494 0xc43468
eid = 0x25a
Nov 16 12:01:23 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488164118528 size=16384 flags=1074267264
Nov 16 12:01:23 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488164118528 size=16384 flags=1074267264
Nov 16 12:01:23 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488164118528 size=16384 flags=1074267264
Nov 16 12:01:23 penny.dragonfear zed[19233]: eid=562 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=16384 offset=4488164118528 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488169078784 size=36864 flags=1074267264
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488169078784 size=49152 flags=1074267264
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488164130816 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488164130816 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488164130816 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488169078784 size=49152 flags=1074267264
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488169111552 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488169111552 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19239]: eid=566 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164118528 priority=3 err=5 flags=0x380080 bookmark=0:285:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488169111552 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19241]: eid=563 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164130816 priority=3 err=5 flags=0x380080 bookmark=0:975:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172371968 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172371968 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172371968 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172371968 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172371968 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172371968 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19245]: eid=571 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164118528 priority=3 err=5 flags=0x380080 bookmark=0:285:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19248]: eid=567 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=16384 offset=4488164118528 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear zed[19255]: eid=564 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164126720 priority=3 err=5 flags=0x380080 bookmark=0:971:1:0
Nov 16 12:01:24 penny.dragonfear zed[19258]: eid=565 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488164122624 priority=3 err=5 flags=0x380080 bookmark=0:971:0:1
Nov 16 12:01:24 penny.dragonfear zed[19261]: eid=573 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169111552 priority=3 err=5 flags=0x380080 bookmark=0:60:0:0
Nov 16 12:01:24 penny.dragonfear zed[19265]: eid=575 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=4096 offset=4488169123840 priority=3 err=5 flags=0x380080 bookmark=0:48:1:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19266]: eid=572 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=36864 offset=4488169078784 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear zed[19268]: eid=574 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 size=49152 offset=4488169078784 priority=3 err=5 flags=0x40080480
Nov 16 12:01:24 penny.dragonfear zed[19269]: eid=568 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164130816 priority=3 err=5 flags=0x380080 bookmark=0:975:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear zed[19252]: eid=570 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164122624 priority=3 err=5 flags=0x380080 bookmark=0:971:0:1
Nov 16 12:01:24 penny.dragonfear zed[19271]: eid=576 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169107456 priority=3 err=5 flags=0x380080 bookmark=0:56:0:1
Nov 16 12:01:24 penny.dragonfear zed[19275]: eid=578 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169099264 priority=3 err=5 flags=0x380080 bookmark=0:48:0:1
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear zed[19281]: eid=579 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169095168 priority=3 err=5 flags=0x380080 bookmark=0:28:0:0
Nov 16 12:01:24 penny.dragonfear zed[19282]: eid=580 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169091072 priority=3 err=5 flags=0x380080 bookmark=0:1:0:1
Nov 16 12:01:24 penny.dragonfear zed[19278]: eid=569 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488164126720 priority=3 err=5 flags=0x380080 bookmark=0:971:1:0
Nov 16 12:01:24 penny.dragonfear zed[19286]: eid=581 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169086976 priority=3 err=5 flags=0x380080 bookmark=0:0:0:220
Nov 16 12:01:24 penny.dragonfear zed[19287]: eid=577 class=io pool='chest' vdev=dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 size=4096 offset=4488169103360 priority=3 err=5 flags=0x380080 bookmark=0:52:0:0
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1605761
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-ad4ac4d72da84b6a866caeff621301f4-luks-ad4ac4d7-2da8-4b6a-866c-aeff621301f4 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-f0670720ae6440dab1618965f9e01718-luks-f0670720-ae64-40da-b161-8965f9e01718 error=5 type=2 offset=4488172384256 size=4096 flags=1572992
Nov 16 12:01:24 penny.dragonfear kernel: zio pool=chest vdev=/dev/disk/by-id/dm-uuid-CRYPT-LUKS2-01776eeb5259431f971aa6a12a9bd1fb-luks-01776eeb-5259-431f-971a-a6a12a9bd1fb error=5 type=2 offset=4488172384256 size=4096 flags=1572992
```
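For what it's worth, zio_err = 0x5 in these events is EIO, priority 3 should be the async-write queue, and the kernel lines show the same offsets failing on all three mirror members at once. A quick per-vdev tally of the kernel-side failures (a sketch; the grep/sed pattern assumes the messages are formatted exactly as above):
```
# Sketch: count error=5 (EIO) zio write failures per vdev in the kernel log.
journalctl -k --since "2023-11-16 12:00" --until "2023-11-16 12:05" \
  | grep 'zio pool=chest' | grep 'error=5' \
  | sed 's/.*vdev=\([^ ]*\).*/\1/' | sort | uniq -c
```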
The media server experienced the failures first. After a day of running the same software on the backup machine (it's just a Borg backup server, no ZFS send or receive), I decided it had to be software rather than hardware. Both machines have ECC memory.
I can confirm that 2.2.0, as released in this repository, does not have the data corruption issue.
2.2.0 or 2.2.1? That's a relief if it's 2.2.0, as that's what went into 14-RELEASE, right?
```
commit 95785196f26e92d82cf4445654ba84e4a9671c57 (tag: zfs-2.2.0, zfsonlinux/zfs-2.2-release)
Author: Brian Behlendorf <behlendorf1@llnl.gov>
Date: Thu Oct 12 16:14:14 2023 -0700
Tag 2.2.0
New Features
- Block cloning (#13392)
- Linux container support (#14070, #14097, #12263)
- Scrub error log (#12812, #12355)
- BLAKE3 checksums (#12918)
- Corrective "zfs receive"
- Vdev and zpool user properties
Performance
- Fully adaptive ARC (#14359)
- SHA2 checksums (#13741)
- Edon-R checksums (#13618)
- Zstd early abort (#13244)
- Prefetch improvements (#14603, #14516, #14402, #14243, #13452)
- General optimization (#14121, #14123, #14039, #13680, #13613,
#13606, #13576, #13553, #12789, #14925, #14948)
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
```
That is what I build and what I am running right now.
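In case it helps anyone compare notes, the versions actually in use can be double-checked with standard commands (a sketch; output omitted):
```
zfs version                  # prints userland and kernel-module versions
cat /sys/module/zfs/version  # version string of the loaded zfs module
```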
Tonight I'll try to upgrade my backup server to 2.2.1. Wish me luck.
I will report on the results after several days of testing.
I just ran into this after upgrading to zfs 2.2.1 (immediately after reboot). I'm also on Fedora and running zfs on top of LUKS. I'm seeing write errors, but not any checksum errors, and zpool scrub gets interrupted almost immediately.
I'm pretty sure it's not a hardware issue. None of the drives are reporting SMART errors, and each vdev is showing a similar number of errors per drive despite the drives being connected via two different paths (one drive via an LSI HBA, one via native Intel SATA).
I'm going to try downgrading to zfs 2.2.0 to see if that helps. Unfortunately, I can't downgrade further than that because I've already enabled the new zpool features.
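For reference, this is roughly how to see which feature flags are pinning the pool to 2.2.x (a sketch; tank is a placeholder pool name):
```
# Sketch: list feature flags that are enabled or active ('tank' is a placeholder).
zpool get all tank | grep 'feature@' | grep -v disabled
```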
6.5.12-300.fc39.x86_64
zfs-2.2.1-1.fc39.x86_64
zpool status -v
dmesg
```
[ 0.000000] microcode: updated early: 0x113 -> 0x11d, date = 2023-08-29
[ 0.000000] Linux version 6.5.12-300.fc39.x86_64 (mockbuild@cda4963b6857459f9d1b40ea59f8a44a) (gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4), GNU ld version 2.40-13.fc39) #1 SMP PREEMPT_DYNAMIC Mon Nov 20 22:44:24 UTC 2023
[ 0.000000] Command line: root=UUID=e3af9a0d-aa4f-4d81-b315-97fa46206986 ro rd.luks.uuid=luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42 rhgb quiet console=tty0 console=ttyS1,115200n8 intel_iommu=on rd.shell=0 systemd.machine_id=ec7321adfdbb412093efcdc435abb26e
[ 0.000000] x86/split lock detection: #AC: crashing the kernel on kernel split_locks and warning on user-space split_locks
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009dfff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009e000-0x000000000009efff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007177dfff] usable
[ 0.000000] BIOS-e820: [mem 0x000000007177e000-0x000000007487dfff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000007487e000-0x000000007499cfff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x000000007499d000-0x0000000074ac9fff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x0000000074aca000-0x0000000075ffefff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000075fff000-0x0000000075ffffff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000076000000-0x0000000079ffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000007a800000-0x000000007abfffff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000007b000000-0x00000000803fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000c0000000-0x00000000cfffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed20000-0x00000000fed7ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000107fbfffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] e820: update [mem 0x66bb6018-0x66bfbe57] usable ==> usable
[ 0.000000] e820: update [mem 0x66bb6018-0x66bfbe57] usable ==> usable
[ 0.000000] e820: update [mem 0x66b91018-0x66bb5657] usable ==> usable
[ 0.000000] e820: update [mem 0x66b91018-0x66bb5657] usable ==> usable
[ 0.000000] e820: update [mem 0x63b68018-0x63b8c657] usable ==> usable
[ 0.000000] e820: update [mem 0x63b68018-0x63b8c657] usable ==> usable
[ 0.000000] e820: update [mem 0x63b36018-0x63b67657] usable ==> usable
[ 0.000000] e820: update [mem 0x63b36018-0x63b67657] usable ==> usable
[ 0.000000] e820: update [mem 0x66b83018-0x66b90657] usable ==> usable
[ 0.000000] e820: update [mem 0x66b83018-0x66b90657] usable ==> usable
[ 0.000000] extended physical RAM map:
[ 0.000000] reserve setup_data: [mem 0x0000000000000000-0x000000000009dfff] usable
[ 0.000000] reserve setup_data: [mem 0x000000000009e000-0x000000000009efff] reserved
[ 0.000000] reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] usable
[ 0.000000] reserve setup_data: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000000000100000-0x0000000063b36017] usable
[ 0.000000] reserve setup_data: [mem 0x0000000063b36018-0x0000000063b67657] usable
[ 0.000000] reserve setup_data: [mem 0x0000000063b67658-0x0000000063b68017] usable
[ 0.000000] reserve setup_data: [mem 0x0000000063b68018-0x0000000063b8c657] usable
[ 0.000000] reserve setup_data: [mem 0x0000000063b8c658-0x0000000066b83017] usable
[ 0.000000] reserve setup_data: [mem 0x0000000066b83018-0x0000000066b90657] usable
[ 0.000000] reserve setup_data: [mem 0x0000000066b90658-0x0000000066b91017] usable
[ 0.000000] reserve setup_data: [mem 0x0000000066b91018-0x0000000066bb5657] usable
[ 0.000000] reserve setup_data: [mem 0x0000000066bb5658-0x0000000066bb6017] usable
[ 0.000000] reserve setup_data: [mem 0x0000000066bb6018-0x0000000066bfbe57] usable
[ 0.000000] reserve setup_data: [mem 0x0000000066bfbe58-0x000000007177dfff] usable
[ 0.000000] reserve setup_data: [mem 0x000000007177e000-0x000000007487dfff] reserved
[ 0.000000] reserve setup_data: [mem 0x000000007487e000-0x000000007499cfff] ACPI data
[ 0.000000] reserve setup_data: [mem 0x000000007499d000-0x0000000074ac9fff] ACPI NVS
[ 0.000000] reserve setup_data: [mem 0x0000000074aca000-0x0000000075ffefff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000000075fff000-0x0000000075ffffff] usable
[ 0.000000] reserve setup_data: [mem 0x0000000076000000-0x0000000079ffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x000000007a800000-0x000000007abfffff] reserved
[ 0.000000] reserve setup_data: [mem 0x000000007b000000-0x00000000803fffff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000c0000000-0x00000000cfffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fed00000-0x00000000fed00fff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fed20000-0x00000000fed7ffff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] reserve setup_data: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[ 0.000000] reserve setup_data: [mem 0x0000000100000000-0x000000107fbfffff] usable
[ 0.000000] efi: EFI v2.8 by American Megatrends
[ 0.000000] efi: ACPI=0x74a26000 ACPI 2.0=0x74a26014 TPMFinalLog=0x749f5000 SMBIOS=0x75bbc000 SMBIOS 3.0=0x75bbb000 MEMATTR=0x692e7018 RNG=0x748b6f18 INITRD=0x692e1d98 TPMEventLog=0x66bfc018
[ 0.000000] random: crng init done
[ 0.000000] efi: Remove mem286: MMIO range=[0xc0000000-0xcfffffff] (256MB) from e820 map
[ 0.000000] e820: remove [mem 0xc0000000-0xcfffffff] reserved
[ 0.000000] efi: Not removing mem287: MMIO range=[0xfe000000-0xfe010fff] (68KB) from e820 map
[ 0.000000] efi: Not removing mem288: MMIO range=[0xfec00000-0xfec00fff] (4KB) from e820 map
[ 0.000000] efi: Not removing mem289: MMIO range=[0xfed00000-0xfed00fff] (4KB) from e820 map
[ 0.000000] efi: Not removing mem291: MMIO range=[0xfee00000-0xfee00fff] (4KB) from e820 map
[ 0.000000] efi: Remove mem292: MMIO range=[0xff000000-0xffffffff] (16MB) from e820 map
[ 0.000000] e820: remove [mem 0xff000000-0xffffffff] reserved
[ 0.000000] secureboot: Secure boot enabled
[ 0.000000] Kernel is locked down from EFI Secure Boot mode; see man kernel_lockdown.7
[ 0.000000] SMBIOS 3.5.0 present.
[ 0.000000] DMI: Supermicro Super Server/X13SAE-F, BIOS 2.1 04/06/2023
[ 0.000000] tsc: Detected 3500.000 MHz processor
[ 0.000000] tsc: Detected 3494.400 MHz TSC
[ 0.001513] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.001515] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.001525] last_pfn = 0x107fc00 max_arch_pfn = 0x400000000
[ 0.001529] total RAM covered: 128960M
[ 0.001612] Found optimal setting for mtrr clean up
[ 0.001613] gran_size: 64K chunk_size: 128M num_reg: 7 lose cover RAM: 0G
[ 0.001616] MTRR map: 6 entries (3 fixed + 3 variable; max 23), built from 10 variable MTRRs
[ 0.001618] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
[ 0.002157] e820: update [mem 0x7c000000-0xffffffff] usable ==> reserved
[ 0.002160] last_pfn = 0x76000 max_arch_pfn = 0x400000000
[ 0.018585] Using GB pages for direct mapping
[ 0.018586] Incomplete global flushes, disabling PCID
[ 0.018768] secureboot: Secure boot enabled
[ 0.018769] RAMDISK: [mem 0x5fdc8000-0x6273dfff]
[ 0.018773] ACPI: Early table checksum verification disabled
[ 0.018776] ACPI: RSDP 0x0000000074A26014 000024 (v02 SUPERM)
[ 0.018780] ACPI: XSDT 0x0000000074A25728 000144 (v01 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018785] ACPI: FACP 0x000000007499A000 000114 (v06 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018789] ACPI: DSDT 0x00000000748FB000 09CE12 (v02 SUPERM SMCI--MB 01072009 INTL 20200717)
[ 0.018791] ACPI: FACS 0x0000000074AC8000 000040
[ 0.018793] ACPI: SPMI 0x0000000074999000 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
[ 0.018796] ACPI: SPMI 0x0000000074998000 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
[ 0.018798] ACPI: FIDT 0x00000000748FA000 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
[ 0.018800] ACPI: SSDT 0x000000007499C000 00038C (v02 PmaxDv Pmax_Dev 00000001 INTL 20200717)
[ 0.018803] ACPI: SSDT 0x00000000748F4000 005C55 (v02 CpuRef CpuSsdt 00003000 INTL 20200717)
[ 0.018805] ACPI: SSDT 0x00000000748F1000 002B7D (v02 SaSsdt SaSsdt 00003000 INTL 20200717)
[ 0.018807] ACPI: SSDT 0x00000000748ED000 003359 (v02 INTEL IgfxSsdt 00003000 INTL 20200717)
[ 0.018810] ACPI: HPET 0x000000007499B000 000038 (v01 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018812] ACPI: APIC 0x00000000748EC000 0001DC (v05 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018814] ACPI: MCFG 0x00000000748EB000 00003C (v01 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018816] ACPI: SSDT 0x00000000748E1000 009350 (v02 SUPERM AdlS_Rvp 00001000 INTL 20200717)
[ 0.018818] ACPI: SSDT 0x00000000748DF000 001F1A (v02 SUPERM Ther_Rvp 00001000 INTL 20200717)
[ 0.018820] ACPI: UEFI 0x00000000749DC000 000048 (v01 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018823] ACPI: NHLT 0x00000000748DE000 00002D (v00 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018825] ACPI: LPIT 0x00000000748DD000 0000CC (v01 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018827] ACPI: SSDT 0x00000000748D9000 002A83 (v02 SUPERM PtidDevc 00001000 INTL 20200717)
[ 0.018829] ACPI: SSDT 0x00000000748D0000 008F27 (v02 SUPERM TbtTypeC 00000000 INTL 20200717)
[ 0.018831] ACPI: DBGP 0x00000000748CF000 000034 (v01 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018833] ACPI: DBG2 0x00000000748CE000 000054 (v00 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018836] ACPI: SSDT 0x00000000748CC000 00190A (v02 SUPERM UsbCTabl 00001000 INTL 20200717)
[ 0.018838] ACPI: DMAR 0x00000000748CB000 000088 (v02 INTEL EDK2 00000002 01000013)
[ 0.018841] ACPI: FPDT 0x00000000748CA000 000044 (v01 SUPERM A M I 01072009 AMI 01000013)
[ 0.018843] ACPI: SSDT 0x00000000748C8000 0012DA (v02 INTEL xh_adls3 00000000 INTL 20200717)
[ 0.018845] ACPI: SSDT 0x00000000748C4000 003AEA (v02 SocGpe SocGpe 00003000 INTL 20200717)
[ 0.018847] ACPI: SSDT 0x00000000748C0000 0039DA (v02 SocCmn SocCmn 00003000 INTL 20200717)
[ 0.018849] ACPI: SSDT 0x00000000748BF000 000144 (v02 Intel ADebTabl 00001000 INTL 20200717)
[ 0.018850] ACPI: BGRT 0x00000000748BE000 000038 (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
[ 0.018852] ACPI: TPM2 0x00000000748BD000 00004C (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
[ 0.018854] ACPI: PHAT 0x00000000748BB000 0005F1 (v01 SUPERM SMCI--MB 00000005 MSFT 0100000D)
[ 0.018856] ACPI: ASF! 0x00000000748BC000 000074 (v32 SUPERM SMCI--MB 01072009 AMI 01000013)
[ 0.018858] ACPI: WSMT 0x00000000748DC000 000028 (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
[ 0.018860] ACPI: EINJ 0x00000000748BA000 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
[ 0.018862] ACPI: ERST 0x00000000748B9000 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
[ 0.018864] ACPI: BERT 0x00000000748B8000 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
[ 0.018866] ACPI: HEST 0x00000000748B7000 0000A8 (v01 AMI AMI.HEST 00000000 AMI. 00000000)
[ 0.018867] ACPI: Reserving FACP table memory at [mem 0x7499a000-0x7499a113]
[ 0.018868] ACPI: Reserving DSDT table memory at [mem 0x748fb000-0x74997e11]
[ 0.018869] ACPI: Reserving FACS table memory at [mem 0x74ac8000-0x74ac803f]
[ 0.018869] ACPI: Reserving SPMI table memory at [mem 0x74999000-0x74999040]
[ 0.018870] ACPI: Reserving SPMI table memory at [mem 0x74998000-0x74998040]
[ 0.018871] ACPI: Reserving FIDT table memory at [mem 0x748fa000-0x748fa09b]
[ 0.018872] ACPI: Reserving SSDT table memory at [mem 0x7499c000-0x7499c38b]
[ 0.018872] ACPI: Reserving SSDT table memory at [mem 0x748f4000-0x748f9c54]
[ 0.018873] ACPI: Reserving SSDT table memory at [mem 0x748f1000-0x748f3b7c]
[ 0.018874] ACPI: Reserving SSDT table memory at [mem 0x748ed000-0x748f0358]
[ 0.018874] ACPI: Reserving HPET table memory at [mem 0x7499b000-0x7499b037]
[ 0.018875] ACPI: Reserving APIC table memory at [mem 0x748ec000-0x748ec1db]
[ 0.018876] ACPI: Reserving MCFG table memory at [mem 0x748eb000-0x748eb03b]
[ 0.018876] ACPI: Reserving SSDT table memory at [mem 0x748e1000-0x748ea34f]
[ 0.018877] ACPI: Reserving SSDT table memory at [mem 0x748df000-0x748e0f19]
[ 0.018877] ACPI: Reserving UEFI table memory at [mem 0x749dc000-0x749dc047]
[ 0.018878] ACPI: Reserving NHLT table memory at [mem 0x748de000-0x748de02c]
[ 0.018879] ACPI: Reserving LPIT table memory at [mem 0x748dd000-0x748dd0cb]
[ 0.018879] ACPI: Reserving SSDT table memory at [mem 0x748d9000-0x748dba82]
[ 0.018880] ACPI: Reserving SSDT table memory at [mem 0x748d0000-0x748d8f26]
[ 0.018881] ACPI: Reserving DBGP table memory at [mem 0x748cf000-0x748cf033]
[ 0.018881] ACPI: Reserving DBG2 table memory at [mem 0x748ce000-0x748ce053]
[ 0.018882] ACPI: Reserving SSDT table memory at [mem 0x748cc000-0x748cd909]
[ 0.018883] ACPI: Reserving DMAR table memory at [mem 0x748cb000-0x748cb087]
[ 0.018883] ACPI: Reserving FPDT table memory at [mem 0x748ca000-0x748ca043]
[ 0.018884] ACPI: Reserving SSDT table memory at [mem 0x748c8000-0x748c92d9]
[ 0.018885] ACPI: Reserving SSDT table memory at [mem 0x748c4000-0x748c7ae9]
[ 0.018885] ACPI: Reserving SSDT table memory at [mem 0x748c0000-0x748c39d9]
[ 0.018886] ACPI: Reserving SSDT table memory at [mem 0x748bf000-0x748bf143]
[ 0.018887] ACPI: Reserving BGRT table memory at [mem 0x748be000-0x748be037]
[ 0.018887] ACPI: Reserving TPM2 table memory at [mem 0x748bd000-0x748bd04b]
[ 0.018888] ACPI: Reserving PHAT table memory at [mem 0x748bb000-0x748bb5f0]
[ 0.018889] ACPI: Reserving ASF! table memory at [mem 0x748bc000-0x748bc073]
[ 0.018889] ACPI: Reserving WSMT table memory at [mem 0x748dc000-0x748dc027]
[ 0.018890] ACPI: Reserving EINJ table memory at [mem 0x748ba000-0x748ba12f]
[ 0.018891] ACPI: Reserving ERST table memory at [mem 0x748b9000-0x748b922f]
[ 0.018891] ACPI: Reserving BERT table memory at [mem 0x748b8000-0x748b802f]
[ 0.018892] ACPI: Reserving HEST table memory at [mem 0x748b7000-0x748b70a7]
[ 0.019056] No NUMA configuration found
[ 0.019056] Faking a node at [mem 0x0000000000000000-0x000000107fbfffff]
[ 0.019062] NODE_DATA(0) allocated [mem 0x107fbd5000-0x107fbfffff]
[ 0.019226] Zone ranges:
[ 0.019226] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.019228] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
[ 0.019229] Normal [mem 0x0000000100000000-0x000000107fbfffff]
[ 0.019230] Device empty
[ 0.019231] Movable zone start for each node
[ 0.019232] Early memory node ranges
[ 0.019232] node 0: [mem 0x0000000000001000-0x000000000009dfff]
[ 0.019233] node 0: [mem 0x000000000009f000-0x000000000009ffff]
[ 0.019234] node 0: [mem 0x0000000000100000-0x000000007177dfff]
[ 0.019234] node 0: [mem 0x0000000075fff000-0x0000000075ffffff]
[ 0.019235] node 0: [mem 0x0000000100000000-0x000000107fbfffff]
[ 0.019238] Initmem setup node 0 [mem 0x0000000000001000-0x000000107fbfffff]
[ 0.019242] On node 0, zone DMA: 1 pages in unavailable ranges
[ 0.019243] On node 0, zone DMA: 1 pages in unavailable ranges
[ 0.019261] On node 0, zone DMA: 96 pages in unavailable ranges
[ 0.021359] On node 0, zone DMA32: 18561 pages in unavailable ranges
[ 0.095422] On node 0, zone Normal: 8192 pages in unavailable ranges
[ 0.095428] On node 0, zone Normal: 1024 pages in unavailable ranges
[ 0.096453] ACPI: PM-Timer IO Port: 0x1808
[ 0.096460] ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
[ 0.096462] ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
[ 0.096463] ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
[ 0.096463] ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
[ 0.096464] ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
[ 0.096464] ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
[ 0.096464] ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
[ 0.096465] ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
[ 0.096465] ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
[ 0.096466] ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
[ 0.096466] ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
[ 0.096467] ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
[ 0.096467] ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
[ 0.096468] ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
[ 0.096468] ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
[ 0.096469] ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
[ 0.096469] ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
[ 0.096470] ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
[ 0.096470] ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
[ 0.096470] ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
[ 0.096471] ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
[ 0.096471] ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
[ 0.096472] ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
[ 0.096472] ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
[ 0.096508] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
[ 0.096510] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.096512] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.096515] ACPI: Using ACPI (MADT) for SMP configuration information
[ 0.096516] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.096527] e820: update [mem 0x6867d000-0x688bdfff] usable ==> reserved
[ 0.096540] TSC deadline timer available
[ 0.096541] smpboot: Allowing 20 CPUs, 0 hotplug CPUs
[ 0.096558] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[ 0.096560] PM: hibernation: Registered nosave memory: [mem 0x0009e000-0x0009efff]
[ 0.096561] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000fffff]
[ 0.096562] PM: hibernation: Registered nosave memory: [mem 0x63b36000-0x63b36fff]
[ 0.096563] PM: hibernation: Registered nosave memory: [mem 0x63b67000-0x63b67fff]
[ 0.096564] PM: hibernation: Registered nosave memory: [mem 0x63b68000-0x63b68fff]
[ 0.096565] PM: hibernation: Registered nosave memory: [mem 0x63b8c000-0x63b8cfff]
[ 0.096566] PM: hibernation: Registered nosave memory: [mem 0x66b83000-0x66b83fff]
[ 0.096567] PM: hibernation: Registered nosave memory: [mem 0x66b90000-0x66b90fff]
[ 0.096567] PM: hibernation: Registered nosave memory: [mem 0x66b91000-0x66b91fff]
[ 0.096569] PM: hibernation: Registered nosave memory: [mem 0x66bb5000-0x66bb5fff]
[ 0.096569] PM: hibernation: Registered nosave memory: [mem 0x66bb6000-0x66bb6fff]
[ 0.096570] PM: hibernation: Registered nosave memory: [mem 0x66bfb000-0x66bfbfff]
[ 0.096571] PM: hibernation: Registered nosave memory: [mem 0x6867d000-0x688bdfff]
[ 0.096572] PM: hibernation: Registered nosave memory: [mem 0x7177e000-0x7487dfff]
[ 0.096573] PM: hibernation: Registered nosave memory: [mem 0x7487e000-0x7499cfff]
[ 0.096573] PM: hibernation: Registered nosave memory: [mem 0x7499d000-0x74ac9fff]
[ 0.096574] PM: hibernation: Registered nosave memory: [mem 0x74aca000-0x75ffefff]
[ 0.096575] PM: hibernation: Registered nosave memory: [mem 0x76000000-0x79ffffff]
[ 0.096575] PM: hibernation: Registered nosave memory: [mem 0x7a000000-0x7a7fffff]
[ 0.096576] PM: hibernation: Registered nosave memory: [mem 0x7a800000-0x7abfffff]
[ 0.096576] PM: hibernation: Registered nosave memory: [mem 0x7ac00000-0x7affffff]
[ 0.096577] PM: hibernation: Registered nosave memory: [mem 0x7b000000-0x803fffff]
[ 0.096577] PM: hibernation: Registered nosave memory: [mem 0x80400000-0xfdffffff]
[ 0.096577] PM: hibernation: Registered nosave memory: [mem 0xfe000000-0xfe010fff]
[ 0.096578] PM: hibernation: Registered nosave memory: [mem 0xfe011000-0xfebfffff]
[ 0.096578] PM: hibernation: Registered nosave memory: [mem 0xfec00000-0xfec00fff]
[ 0.096579] PM: hibernation: Registered nosave memory: [mem 0xfec01000-0xfecfffff]
[ 0.096579] PM: hibernation: Registered nosave memory: [mem 0xfed00000-0xfed00fff]
[ 0.096580] PM: hibernation: Registered nosave memory: [mem 0xfed01000-0xfed1ffff]
[ 0.096580] PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfed7ffff]
[ 0.096580] PM: hibernation: Registered nosave memory: [mem 0xfed80000-0xfedfffff]
[ 0.096581] PM: hibernation: Registered nosave memory: [mem 0xfee00000-0xfee00fff]
[ 0.096581] PM: hibernation: Registered nosave memory: [mem 0xfee01000-0xffffffff]
[ 0.096583] [mem 0x80400000-0xfdffffff] available for PCI devices
[ 0.096584] Booting paravirtualized kernel on bare hardware
[ 0.096586] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
[ 0.102045] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:20 nr_cpu_ids:20 nr_node_ids:1
[ 0.102722] percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
[ 0.102727] pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
[ 0.102728] pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
[ 0.102735] pcpu-alloc: [0] 16 17 18 19 -- -- -- --
[ 0.102753] Kernel command line: root=UUID=e3af9a0d-aa4f-4d81-b315-97fa46206986 ro rd.luks.uuid=luks-cf6bc8bd-2dfb-4b60-b004-4a283c0d2d42 rhgb quiet console=tty0 console=ttyS1,115200n8 intel_iommu=on rd.shell=0 systemd.machine_id=ec7321adfdbb412093efcdc435abb26e
[ 0.102812] DMAR: IOMMU enabled
[ 0.102832] Unknown kernel command line parameters "rhgb", will be passed to user space.
[ 0.107630] Dentry cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
[ 0.110042] Inode-cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[ 0.110223] Fallback order for Node 0: 0
[ 0.110226] Built 1 zonelists, mobility grouping on. Total pages: 16455217
[ 0.110227] Policy zone: Normal
[ 0.110396] mem auto-init: stack:all(zero), heap alloc:off, heap free:off
[ 0.110403] software IO TLB: area num 32.
[ 0.209816] Memory: 65361156K/66866292K available (18432K kernel code, 3267K rwdata, 14476K rodata, 4532K init, 17364K bss, 1504876K reserved, 0K cma-reserved)
[ 0.210017] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=20, Nodes=1
[ 0.210037] ftrace: allocating 53614 entries in 210 pages
[ 0.216433] ftrace: allocated 210 pages with 4 groups
[ 0.217060] Dynamic Preempt: voluntary
[ 0.217103] rcu: Preemptible hierarchical RCU implementation.
[ 0.217103] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=20.
[ 0.217104] Trampoline variant of Tasks RCU enabled.
[ 0.217105] Rude variant of Tasks RCU enabled.
[ 0.217105] Tracing variant of Tasks RCU enabled.
[ 0.217106] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
[ 0.217107] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=20
[ 0.219095] NR_IRQS: 524544, nr_irqs: 2216, preallocated irqs: 16
[ 0.219394] rcu: srcu_init: Setting srcu_struct sizes based on contention.
[ 0.219650] kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
[ 0.219682] Console: colour dummy device 80x25
[ 0.219684] printk: console [tty0] enabled
[ 0.219734] printk: console [ttyS1] enabled
[ 0.219774] ACPI: Core revision 20230331
[ 0.220123] hpet: HPET dysfunctional in PC10. Force disabled.
[ 0.220124] APIC: Switch to symmetric I/O mode setup
[ 0.220126] DMAR: Host address width 46
[ 0.220127] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.220133] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[ 0.220135] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.220139] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[ 0.220141] DMAR: RMRR base: 0x0000007c000000 end: 0x000000803fffff
[ 0.220143] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.220144] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.220145] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.221720] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 0.221722] x2apic enabled
[ 0.221741] Switched APIC routing to cluster x2apic.
[ 0.226137] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x325ea749ca1, max_idle_ns: 440795373125 ns
[ 0.226143] Calibrating delay loop (skipped), value calculated using timer frequency.. 6988.80 BogoMIPS (lpj=3494400)
[ 0.226191] x86/tme: enabled by BIOS
[ 0.226192] x86/tme: Unknown policy is active: 0x2
[ 0.226193] x86/mktme: No known encryption algorithm is supported: 0x4
[ 0.226194] x86/mktme: enabled by BIOS
[ 0.226194] x86/mktme: 15 KeyIDs available
[ 0.226201] CPU0: Thermal monitoring enabled (TM1)
[ 0.226203] x86/cpu: User Mode Instruction Prevention (UMIP) activated
[ 0.226329] process: using mwait in idle threads
[ 0.226331] CET detected: Indirect Branch Tracking enabled
[ 0.226332] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
[ 0.226333] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[ 0.226335] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[ 0.226338] Spectre V2 : Mitigation: Enhanced / Automatic IBRS
[ 0.226338] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 0.226339] Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
[ 0.226340] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[ 0.226342] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
[ 0.226350] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[ 0.226351] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[ 0.226352] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[ 0.226352] x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
[ 0.226353] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[ 0.226354] x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
[ 0.226355] x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
[ 0.227141] Freeing SMP alternatives memory: 48K
[ 0.227141] pid_max: default: 32768 minimum: 301
[ 0.227141] LSM: initializing lsm=lockdown,capability,yama,selinux,bpf,landlock,integrity
[ 0.227141] Yama: becoming mindful.
[ 0.227141] SELinux: Initializing.
[ 0.227141] LSM support for eBPF active
[ 0.227141] landlock: Up and running.
[ 0.227141] Mount-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 0.227141] Mountpoint-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
[ 0.227141] smpboot: CPU0: 13th Gen Intel(R) Core(TM) i5-13600K (family: 0x6, model: 0xb7, stepping: 0x1)
[ 0.227141] RCU Tasks: Setting shift to 5 and lim to 1 rcu_task_cb_adjust=1.
[ 0.227141] RCU Tasks Rude: Setting shift to 5 and lim to 1 rcu_task_cb_adjust=1.
[ 0.227141] RCU Tasks Trace: Setting shift to 5 and lim to 1 rcu_task_cb_adjust=1.
[ 0.227141] Performance Events: XSAVE Architectural LBR, PEBS fmt4+-baseline, AnyThread deprecated, Alderlake Hybrid events, 32-deep LBR, full-width counters, Intel PMU driver.
[ 0.227141] core: cpu_core PMU driver:
[ 0.227141] ... version: 5
[ 0.227141] ... bit width: 48
[ 0.227141] ... generic registers: 8
[ 0.227141] ... value mask: 0000ffffffffffff
[ 0.227141] ... max period: 00007fffffffffff
[ 0.227141] ... fixed-purpose events: 4
[ 0.227141] ... event mask: 0001000f000000ff
[ 0.227141] signal: max sigframe size: 3632
[ 0.227141] Estimated ratio of average max frequency by base frequency (times 1024): 1492
[ 0.227141] rcu: Hierarchical SRCU implementation.
[ 0.227141] rcu: Max phase no-delay instances is 400.
[ 0.227220] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[ 0.227334] smp: Bringing up secondary CPUs ...
[ 0.227393] smpboot: x86: Booting SMP configuration:
[ 0.227394] .... node #0, CPUs: #2 #4 #6 #8 #10 #12 #13 #14 #15 #16 #17 #18 #19
[ 0.007566] core: cpu_atom PMU driver: PEBS-via-PT
[ 0.007566] ... version: 5
[ 0.007566] ... bit width: 48
[ 0.007566] ... generic registers: 6
[ 0.007566] ... value mask: 0000ffffffffffff
[ 0.007566] ... max period: 00007fffffffffff
[ 0.007566] ... fixed-purpose events: 3
[ 0.007566] ... event mask: 000000070000003f
[ 0.239215] #1 #3 #5 #7 #9 #11
[ 0.243173] smp: Brought up 1 node, 20 CPUs
[ 0.243173] smpboot: Max logical packages: 1
[ 0.243173] smpboot: Total of 20 processors activated (139776.00 BogoMIPS)
[ 0.245697] devtmpfs: initialized
[ 0.245697] x86/mm: Memory block size: 2048MB
[ 0.245697] ACPI: PM: Registering ACPI NVS region [mem 0x7499d000-0x74ac9fff] (1232896 bytes)
[ 0.245697] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[ 0.245697] futex hash table entries: 8192 (order: 7, 524288 bytes, linear)
[ 0.246202] pinctrl core: initialized pinctrl subsystem
[ 0.246417] PM: RTC time: 16:01:02, date: 2023-11-22
[ 0.246694] NET: Registered PF_NETLINK/PF_ROUTE protocol family
[ 0.246795] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
[ 0.246798] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
[ 0.246800] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
[ 0.246809] audit: initializing netlink subsys (disabled)
[ 0.246813] audit: type=2000 audit(1700668862.020:1): state=initialized audit_enabled=0 res=1
[ 0.246813] thermal_sys: Registered thermal governor 'fair_share'
[ 0.246813] thermal_sys: Registered thermal governor 'bang_bang'
[ 0.246813] thermal_sys: Registered thermal governor 'step_wise'
[ 0.246813] thermal_sys: Registered thermal governor 'user_space'
[ 0.246813] cpuidle: using governor menu
[ 0.246813] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.247155] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000)
[ 0.247160] PCI: not using MMCONFIG
[ 0.247161] PCI: Using configuration type 1 for base access
[ 0.247434] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ 0.247435] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
[ 0.247435] HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
[ 0.247435] HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
[ 0.247435] HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.247435] HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
[ 0.247435] cryptd: max_cpu_qlen set to 1000
[ 0.247435] raid6: skipped pq benchmark and selected avx2x4
[ 0.247435] raid6: using avx2x2 recovery algorithm
[ 0.247435] ACPI: Added _OSI(Module Device)
[ 0.247435] ACPI: Added _OSI(Processor Device)
[ 0.247435] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.247435] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.324645] ACPI: 14 ACPI AML tables successfully acquired and loaded
[ 0.335625] ACPI: Dynamic OEM Table Load:
[ 0.335634] ACPI: SSDT 0xFFFF92CAC2609A00 0001AB (v02 PmRef Cpu0Psd 00003000 INTL 20200717)
[ 0.336269] ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
[ 0.339825] ACPI: Dynamic OEM Table Load:
[ 0.339831] ACPI: SSDT 0xFFFF92CAC16A8800 000394 (v02 PmRef Cpu0Cst 00003001 INTL 20200717)
[ 0.340598] ACPI: Dynamic OEM Table Load:
[ 0.340604] ACPI: SSDT 0xFFFF92CAC2687000 0006AA (v02 PmRef Cpu0Ist 00003000 INTL 20200717)
[ 0.341409] ACPI: Dynamic OEM Table Load:
[ 0.341414] ACPI: SSDT 0xFFFF92CAC2686800 0004B5 (v02 PmRef Cpu0Hwp 00003000 INTL 20200717)
[ 0.342358] ACPI: Dynamic OEM Table Load:
[ 0.342365] ACPI: SSDT 0xFFFF92CAC16A2000 001BAF (v02 PmRef ApIst 00003000 INTL 20200717)
[ 0.343472] ACPI: Dynamic OEM Table Load:
[ 0.343478] ACPI: SSDT 0xFFFF92CAC16A0000 001038 (v02 PmRef ApHwp 00003000 INTL 20200717)
[ 0.344468] ACPI: Dynamic OEM Table Load:
[ 0.344474] ACPI: SSDT 0xFFFF92CAC268C000 001349 (v02 PmRef ApPsd 00003000 INTL 20200717)
[ 0.345510] ACPI: Dynamic OEM Table Load:
[ 0.345516] ACPI: SSDT 0xFFFF92CAC16B7000 000FBB (v02 PmRef ApCst 00003000 INTL 20200717)
[ 0.352883] ACPI: Interpreter enabled
[ 0.352931] ACPI: PM: (supports S0 S3 S4 S5)
[ 0.352932] ACPI: Using IOAPIC for interrupt routing
[ 0.354040] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000)
[ 0.355303] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved as ACPI motherboard resource
[ 0.355324] HEST: Table parsing has been initialized.
[ 0.355618] GHES: APEI firmware first mode is enabled by APEI bit.
[ 0.355620] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.355621] PCI: Ignoring E820 reservations for host bridge windows
[ 0.356476] ACPI: Enabled 6 GPEs in block 00 to 7F
[ 0.357181] ACPI: \_SB_.PC00.PEG1.PXP_: New power resource
[ 0.358050] ACPI: \_SB_.PC00.PEG0.PXP_: New power resource
[ 0.360217] ACPI: \_SB_.PC00.RP09.PXP_: New power resource
[ 0.361773] ACPI: \_SB_.PC00.RP13.PXP_: New power resource
[ 0.363423] ACPI: \_SB_.PC00.RP01.PXP_: New power resource
[ 0.365162] ACPI: \_SB_.PC00.RP05.PXP_: New power resource
[ 0.368419] ACPI: \_SB_.PC00.RP21.PXP_: New power resource
[ 0.370015] ACPI: \_SB_.PC00.RP25.PXP_: New power resource
[ 0.374005] ACPI: \_SB_.PC00.PAUD: New power resource
[ 0.376601] ACPI: \_SB_.PC00.I2C1.PXTC: New power resource
[ 0.380040] ACPI: \_SB_.PC00.CNVW.WRST: New power resource
[ 0.380174] ACPI: \SPR4: New power resource
[ 0.380282] ACPI: \SPR5: New power resource
[ 0.380389] ACPI: \SPR6: New power resource
[ 0.380494] ACPI: \SPR7: New power resource
[ 0.385000] ACPI: \_TZ_.FN00: New power resource
[ 0.385032] ACPI: \_TZ_.FN01: New power resource
[ 0.385063] ACPI: \_TZ_.FN02: New power resource
[ 0.385093] ACPI: \_TZ_.FN03: New power resource
[ 0.385122] ACPI: \_TZ_.FN04: New power resource
[ 0.385538] ACPI: \PIN_: New power resource
[ 0.385722] ACPI: PCI Root Bridge [PC00] (domain 0000 [bus 00-fe])
[ 0.385726] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
[ 0.386971] acpi PNP0A08:00: _OSC: platform does not support [AER]
[ 0.389470] acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug SHPCHotplug PME PCIeCapability LTR DPC]
[ 0.390683] PCI host bridge to bus 0000:00
[ 0.390684] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.390686] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.390687] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.390688] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000fffff window]
[ 0.390689] pci_bus 0000:00: root bus resource [mem 0x80400000-0xbfffffff window]
[ 0.390690] pci_bus 0000:00: root bus resource [mem 0x4000000000-0x7fffffffff window]
[ 0.390691] pci_bus 0000:00: root bus resource [bus 00-fe]
[ 0.603355] pci 0000:00:00.0: [8086:a704] type 00 class 0x060000
[ 0.603450] pci 0000:00:01.0: [8086:a70d] type 01 class 0x060400
[ 0.603499] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
[ 0.603515] pci 0000:00:01.0: PTM enabled (root), 4ns granularity
[ 0.603879] pci 0000:00:01.1: [8086:a72d] type 01 class 0x060400
[ 0.603926] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
[ 0.603941] pci 0000:00:01.1: PTM enabled (root), 4ns granularity
[ 0.604346] pci 0000:00:02.0: [8086:a780] type 00 class 0x038000
[ 0.604353] pci 0000:00:02.0: reg 0x10: [mem 0x60eb000000-0x60ebffffff 64bit]
[ 0.604358] pci 0000:00:02.0: reg 0x18: [mem 0x4000000000-0x400fffffff 64bit pref]
[ 0.604361] pci 0000:00:02.0: reg 0x20: [io 0x5000-0x503f]
[ 0.604373] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[ 0.604394] pci 0000:00:02.0: reg 0x344: [mem 0x60e4000000-0x60e4ffffff 64bit]
[ 0.604395] pci 0000:00:02.0: VF(n) BAR0 space: [mem 0x60e4000000-0x60eaffffff 64bit] (contains BAR0 for 7 VFs)
[ 0.604398] pci 0000:00:02.0: reg 0x34c: [mem 0x6000000000-0x601fffffff 64bit pref]
[ 0.604399] pci 0000:00:02.0: VF(n) BAR2 space: [mem 0x6000000000-0x60dfffffff 64bit pref] (contains BAR2 for 7 VFs)
[ 0.604509] pci 0000:00:06.0: [8086:a74d] type 01 class 0x060400
[ 0.604584] pci 0000:00:06.0: PME# supported from D0 D3hot D3cold
[ 0.604608] pci 0000:00:06.0: PTM enabled (root), 4ns granularity
[ 0.605019] pci 0000:00:0a.0: [8086:a77d] type 00 class 0x118000
[ 0.605026] pci 0000:00:0a.0: reg 0x10: [mem 0x60ec110000-0x60ec117fff 64bit]
[ 0.605045] pci 0000:00:0a.0: enabling Extended Tags
[ 0.605155] pci 0000:00:14.0: [8086:7ae0] type 00 class 0x0c0330
[ 0.605173] pci 0000:00:14.0: reg 0x10: [mem 0x60ec100000-0x60ec10ffff 64bit]
[ 0.605249] pci 0000:00:14.0: PME# supported from D3hot D3cold
[ 0.606996] pci 0000:00:14.2: [8086:7aa7] type 00 class 0x050000
[ 0.607016] pci 0000:00:14.2: reg 0x10: [mem 0x60ec124000-0x60ec127fff 64bit]
[ 0.607029] pci 0000:00:14.2: reg 0x18: [mem 0x60ec12c000-0x60ec12cfff 64bit]
[ 0.607207] pci 0000:00:15.0: [8086:7acc] type 00 class 0x0c8000
[ 0.607235] pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
[ 0.607683] pci 0000:00:15.1: [8086:7acd] type 00 class 0x0c8000
[ 0.607711] pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
[ 0.608093] pci 0000:00:16.0: [8086:7ae8] type 00 class 0x078000
[ 0.608115] pci 0000:00:16.0: reg 0x10: [mem 0x60ec129000-0x60ec129fff 64bit]
[ 0.608200] pci 0000:00:16.0: PME# supported from D3hot
[ 0.608608] pci 0000:00:16.3: [8086:7aeb] type 00 class 0x070002
[ 0.608622] pci 0000:00:16.3: reg 0x10: [io 0x50a0-0x50a7]
[ 0.608630] pci 0000:00:16.3: reg 0x14: [mem 0x84024000-0x84024fff]
[ 0.608768] pci 0000:00:17.0: [8086:7ae2] type 00 class 0x010601
[ 0.608781] pci 0000:00:17.0: reg 0x10: [mem 0x84020000-0x84021fff]
[ 0.608789] pci 0000:00:17.0: reg 0x14: [mem 0x84023000-0x840230ff]
[ 0.608797] pci 0000:00:17.0: reg 0x18: [io 0x5090-0x5097]
[ 0.608805] pci 0000:00:17.0: reg 0x1c: [io 0x5080-0x5083]
[ 0.608813] pci 0000:00:17.0: reg 0x20: [io 0x5060-0x507f]
[ 0.608821] pci 0000:00:17.0: reg 0x24: [mem 0x84022000-0x840227ff]
[ 0.608862] pci 0000:00:17.0: PME# supported from D3hot
[ 0.609084] pci 0000:00:1a.0: [8086:7ac8] type 01 class 0x060400
[ 0.609198] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[ 0.609240] pci 0000:00:1a.0: PTM enabled (root), 4ns granularity
[ 0.609697] pci 0000:00:1b.0: [8086:7ac0] type 01 class 0x060400
[ 0.610204] pci 0000:00:1b.4: [8086:7ac4] type 01 class 0x060400
[ 0.610316] pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
[ 0.610358] pci 0000:00:1b.4: PTM enabled (root), 4ns granularity
[ 0.610770] pci 0000:00:1c.0: [8086:7ab8] type 01 class 0x060400
[ 0.610876] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[ 0.610911] pci 0000:00:1c.0: PTM enabled (root), 4ns granularity
[ 0.611311] pci 0000:00:1c.1: [8086:7ab9] type 01 class 0x060400
[ 0.611413] pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
[ 0.611448] pci 0000:00:1c.1: PTM enabled (root), 4ns granularity
[ 0.611877] pci 0000:00:1c.3: [8086:7abb] type 01 class 0x060400
[ 0.611983] pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
[ 0.612018] pci 0000:00:1c.3: PTM enabled (root), 4ns granularity
[ 0.612438] pci 0000:00:1c.4: [8086:7abc] type 01 class 0x060400
[ 0.612543] pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
[ 0.612579] pci 0000:00:1c.4: PTM enabled (root), 4ns granularity
[ 0.612974] pci 0000:00:1d.0: [8086:7ab0] type 01 class 0x060400
[ 0.613081] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[ 0.613116] pci 0000:00:1d.0: PTM enabled (root), 4ns granularity
[ 0.613519] pci 0000:00:1f.0: [8086:7a88] type 00 class 0x060100
[ 0.613817] pci 0000:00:1f.3: [8086:7ad0] type 00 class 0x040300
[ 0.613859] pci 0000:00:1f.3: reg 0x10: [mem 0x60ec120000-0x60ec123fff 64bit]
[ 0.613913] pci 0000:00:1f.3: reg 0x20: [mem 0x60ec000000-0x60ec0fffff 64bit]
[ 0.614017] pci 0000:00:1f.3: PME# supported from D3hot D3cold
[ 0.614096] pci 0000:00:1f.4: [8086:7aa3] type 00 class 0x0c0500
[ 0.614127] pci 0000:00:1f.4: reg 0x10: [mem 0x60ec128000-0x60ec1280ff 64bit]
[ 0.614159] pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
[ 0.614347] pci 0000:00:1f.5: [8086:7aa4] type 00 class 0x0c8000
[ 0.614366] pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
[ 0.614497] pci 0000:00:1f.6: [8086:1a1c] type 00 class 0x020000
[ 0.614523] pci 0000:00:1f.6: reg 0x10: [mem 0x84000000-0x8401ffff]
[ 0.614646] pci 0000:00:1f.6: PME# supported from D0 D3hot D3cold
[ 0.614844] pci 0000:01:00.0: [1000:0097] type 00 class 0x010700
[ 0.614853] pci 0000:01:00.0: reg 0x10: [io 0x4000-0x40ff]
[ 0.614861] pci 0000:01:00.0: reg 0x14: [mem 0x83400000-0x8340ffff 64bit]
[ 0.614868] pci 0000:01:00.0: reg 0x1c: [mem 0x83200000-0x832fffff 64bit]
[ 0.614878] pci 0000:01:00.0: reg 0x30: [mem 0x83100000-0x831fffff pref]
[ 0.614882] pci 0000:01:00.0: enabling Extended Tags
[ 0.614947] pci 0000:01:00.0: supports D1 D2
[ 0.614970] pci 0000:01:00.0: reg 0x174: [mem 0x83300000-0x8330ffff 64bit]
[ 0.614971] pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x83300000-0x833fffff 64bit] (contains BAR0 for 16 VFs)
[ 0.614978] pci 0000:01:00.0: reg 0x17c: [mem 0x82100000-0x821fffff 64bit]
[ 0.614979] pci 0000:01:00.0: VF(n) BAR2 space: [mem 0x82100000-0x830fffff 64bit] (contains BAR2 for 16 VFs)
[ 0.615086] pci 0000:00:01.0: PCI bridge to [bus 01]
[ 0.615089] pci 0000:00:01.0: bridge window [io 0x4000-0x4fff]
[ 0.615090] pci 0000:00:01.0: bridge window [mem 0x82100000-0x834fffff]
[ 0.615194] pci 0000:02:00.0: [15b3:1013] type 00 class 0x020000
[ 0.615295] pci 0000:02:00.0: reg 0x10: [mem 0x60e2000000-0x60e3ffffff 64bit pref]
[ 0.615449] pci 0000:02:00.0: reg 0x30: [mem 0x83b00000-0x83bfffff pref]
[ 0.615941] pci 0000:02:00.0: PME# supported from D3cold
[ 0.616298] pci 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:01.1 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
[ 0.616663] pci 0000:02:00.1: [15b3:1013] type 00 class 0x020000
[ 0.616763] pci 0000:02:00.1: reg 0x10: [mem 0x60e0000000-0x60e1ffffff 64bit pref]
[ 0.616918] pci 0000:02:00.1: reg 0x30: [mem 0x83a00000-0x83afffff pref]
[ 0.617368] pci 0000:02:00.1: PME# supported from D3cold
[ 0.617877] pci 0000:00:01.1: PCI bridge to [bus 02]
[ 0.617880] pci 0000:00:01.1: bridge window [mem 0x83a00000-0x83bfffff]
[ 0.617882] pci 0000:00:01.1: bridge window [mem 0x60e0000000-0x60e3ffffff 64bit pref]
[ 0.617952] pci 0000:03:00.0: [144d:a808] type 00 class 0x010802
[ 0.617967] pci 0000:03:00.0: reg 0x10: [mem 0x83f00000-0x83f03fff 64bit]
[ 0.618154] pci 0000:00:06.0: PCI bridge to [bus 03]
[ 0.618156] pci 0000:00:06.0: bridge window [mem 0x83f00000-0x83ffffff]
[ 0.618305] pci 0000:04:00.0: [144d:a80a] type 00 class 0x010802
[ 0.618329] pci 0000:04:00.0: reg 0x10: [mem 0x83e00000-0x83e03fff 64bit]
[ 0.618680] pci 0000:00:1a.0: PCI bridge to [bus 04]
[ 0.618684] pci 0000:00:1a.0: bridge window [mem 0x83e00000-0x83efffff]
[ 0.618736] pci 0000:00:1b.0: PCI bridge to [bus 05]
[ 0.618881] pci 0000:06:00.0: [144d:a80a] type 00 class 0x010802
[ 0.618905] pci 0000:06:00.0: reg 0x10: [mem 0x83d00000-0x83d03fff 64bit]
[ 0.619258] pci 0000:00:1b.4: PCI bridge to [bus 06]
[ 0.619262] pci 0000:00:1b.4: bridge window [mem 0x83d00000-0x83dfffff]
[ 0.619329] pci 0000:07:00.0: [1283:8893] type 01 class 0x060401
[ 0.619492] pci 0000:07:00.0: supports D1 D2
[ 0.619492] pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.619553] pci 0000:00:1c.0: PCI bridge to [bus 07-08]
[ 0.619608] pci_bus 0000:08: extended config space not accessible
[ 0.619701] pci 0000:07:00.0: PCI bridge to [bus 08] (subtractive decode)
[ 0.619832] pci 0000:09:00.0: [8086:15f2] type 00 class 0x020000
[ 0.619855] pci 0000:09:00.0: reg 0x10: [mem 0x83600000-0x836fffff]
[ 0.619891] pci 0000:09:00.0: reg 0x1c: [mem 0x83700000-0x83703fff]
[ 0.619927] pci 0000:09:00.0: reg 0x30: [mem 0x83500000-0x835fffff pref]
[ 0.620034] pci 0000:09:00.0: PME# supported from D0 D3hot D3cold
[ 0.620241] pci 0000:00:1c.1: PCI bridge to [bus 09]
[ 0.620246] pci 0000:00:1c.1: bridge window [mem 0x83500000-0x837fffff]
[ 0.620305] pci 0000:0a:00.0: [1a03:1150] type 01 class 0x060400
[ 0.620375] pci 0000:0a:00.0: enabling Extended Tags
[ 0.620464] pci 0000:0a:00.0: supports D1 D2
[ 0.620465] pci 0000:0a:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.620627] pci 0000:00:1c.3: PCI bridge to [bus 0a-0b]
[ 0.620630] pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
[ 0.620632] pci 0000:00:1c.3: bridge window [mem 0x81000000-0x820fffff]
[ 0.620682] pci_bus 0000:0b: extended config space not accessible
[ 0.620699] pci 0000:0b:00.0: [1a03:2000] type 00 class 0x030000
[ 0.620721] pci 0000:0b:00.0: reg 0x10: [mem 0x81000000-0x81ffffff]
[ 0.620733] pci 0000:0b:00.0: reg 0x14: [mem 0x82000000-0x8203ffff]
[ 0.620745] pci 0000:0b:00.0: reg 0x18: [io 0x3000-0x307f]
[ 0.620800] pci 0000:0b:00.0: BAR 0: assigned to efifb
[ 0.620808] pci 0000:0b:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 0.620850] pci 0000:0b:00.0: supports D1 D2
[ 0.620850] pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.620944] pci 0000:0a:00.0: PCI bridge to [bus 0b]
[ 0.620951] pci 0000:0a:00.0: bridge window [io 0x3000-0x3fff]
[ 0.620955] pci 0000:0a:00.0: bridge window [mem 0x81000000-0x820fffff]
[ 0.621056] pci 0000:0c:00.0: [144d:a808] type 00 class 0x010802
[ 0.621080] pci 0000:0c:00.0: reg 0x10: [mem 0x83c00000-0x83c03fff 64bit]
[ 0.621405] pci 0000:00:1c.4: PCI bridge to [bus 0c]
[ 0.621409] pci 0000:00:1c.4: bridge window [mem 0x83c00000-0x83cfffff]
[ 0.621467] pci 0000:0d:00.0: [1179:010e] type 00 class 0x010802
[ 0.621487] pci 0000:0d:00.0: reg 0x10: [mem 0x83800000-0x838fffff 64bit]
[ 0.621519] pci 0000:0d:00.0: reg 0x30: [mem 0x83900000-0x8397ffff pref]
[ 0.621676] pci 0000:00:1d.0: PCI bridge to [bus 0d]
[ 0.621680] pci 0000:00:1d.0: bridge window [mem 0x83800000-0x839fffff]
[ 0.625951] ACPI: PCI: Interrupt link LNKA configured for IRQ 0
[ 0.626035] ACPI: PCI: Interrupt link LNKB configured for IRQ 1
[ 0.626118] ACPI: PCI: Interrupt link LNKC configured for IRQ 0
[ 0.626205] ACPI: PCI: Interrupt link LNKD configured for IRQ 0
[ 0.626288] ACPI: PCI: Interrupt link LNKE configured for IRQ 0
[ 0.626370] ACPI: PCI: Interrupt link LNKF configured for IRQ 0
[ 0.626452] ACPI: PCI: Interrupt link LNKG configured for IRQ 0
[ 0.626535] ACPI: PCI: Interrupt link LNKH configured for IRQ 0
[ 0.633153] iommu: Default domain type: Translated
[ 0.633153] iommu: DMA domain TLB invalidation policy: lazy mode
[ 0.633198] SCSI subsystem initialized
[ 0.633202] libata version 3.00 loaded.
[ 0.633202] ACPI: bus type USB registered
[ 0.633202] usbcore: registered new interface driver usbfs
[ 0.633202] usbcore: registered new interface driver hub
[ 0.633202] usbcore: registered new device driver usb
[ 0.633202] pps_core: LinuxPPS API ver. 1 registered
[ 0.633202] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
zpool events -v: events.log
So far, after downgrading to 2.2.0 (had to build the RPMs from source for Fedora 39), the issue seems to have disappeared. dmesg no longer reports any zfs errors and the zpool status write errors column got reset to 0 (without having run zpool clear). I'm running a scrub now and will report back tomorrow when it completes.
I just noticed I'm also using a SLOG and L2ARC like @Rudd-O. So it looks like both our setups have these things in common: Fedora + kernel 6.5 + LUKS-encrypted disks + striped mirrors + SLOG + L2ARC + ECC memory.
In case it matters at all, it doesn't look like I've ever (intentionally or inadvertently) used block cloning:
[chenxiaolong@sm-1]~% zpool get all satapool0 | grep bclone
satapool0 bcloneused 0 -
satapool0 bclonesaved 0 -
satapool0 bcloneratio 1.00x -
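As a related sanity check (pool name taken from above), the pool feature itself can also be inspected; if it reports disabled, or enabled but unused, block cloning can't be a factor here:
zpool get feature@block_cloning satapool0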
Same here: WRITE errors across all vdevs after upgrading from 2.2.0 to 2.2.1 on Ubuntu 22.04 with the 6.2.0-37 kernel.
# zpool status
pool: zfs
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 1.64M in 00:00:00 with 0 errors on Wed Nov 22 17:51:00 2023
config:
NAME          STATE     READ WRITE CKSUM
zfs           ONLINE       0     0     0
  mirror-0    ONLINE       0     2     0
    D01_89BL  ONLINE       0     2     0
    D02_9WYL  ONLINE       0     2     0
  mirror-1    ONLINE       0     4     0
    D03_89YL  ONLINE       0     4     0
    D04_DA9L  ONLINE       0     4     0
  mirror-2    ONLINE       0     1     0
    D05_P8JL  ONLINE       0     1     0
    D06_8Y7L  ONLINE       0     1     0
  mirror-3    ONLINE       0     1     0
    D07_5B5J  ONLINE       0     1     0
    D08_94SL  ONLINE       0     1     0
logs
  mirror-4    ONLINE       0     0     0
    SLOG_01   ONLINE       0     0     0
    SLOG_02   ONLINE       0     0     0
cache
  L2ARC_01    ONLINE       0     0     0
  L2ARC_02    ONLINE       0     0     0
errors: No known data errors
# dmesg | grep zio
[ 1408.819308] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1572992
[ 1408.819329] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1116851789824 size=4096 flags=1572992
[ 1408.819334] zio pool=zfs vdev=/dev/mapper/D08_94SL error=5 type=2 offset=1082471825408 size=4096 flags=1572992
[ 1408.819726] zio pool=zfs vdev=/dev/mapper/D07_5B5J error=5 type=2 offset=1082471825408 size=4096 flags=1572992
[ 1408.845815] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1116851789824 size=4096 flags=1605761
[ 1408.845822] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1116851789824 size=4096 flags=1605761
[ 1408.845825] zio pool=zfs vdev=/dev/mapper/D07_5B5J error=5 type=2 offset=1082471825408 size=4096 flags=1605761
[ 1408.845832] zio pool=zfs vdev=/dev/mapper/D08_94SL error=5 type=2 offset=1082471825408 size=4096 flags=1605761
[26421.390577] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.390589] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.433580] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26421.433604] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26421.473379] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.473387] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1572992
[26421.497776] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26421.497791] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299887616 size=4096 flags=1605761
[26431.662076] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299944960 size=4096 flags=1572992
[26431.662082] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299944960 size=4096 flags=1572992
[26431.681016] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299944960 size=4096 flags=1605761
[26431.681026] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299944960 size=4096 flags=1605761
[26432.250116] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299994112 size=4096 flags=1572992
[26432.250120] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299994112 size=4096 flags=1572992
[26432.272795] zio pool=zfs vdev=/dev/mapper/D03_89YL error=5 type=2 offset=5850299994112 size=4096 flags=1605761
[26432.272802] zio pool=zfs vdev=/dev/mapper/D04_DA9L error=5 type=2 offset=5850299994112 size=4096 flags=1605761
[29823.446168] zio pool=zfs vdev=/dev/mapper/D05_P8JL error=5 type=2 offset=860650754048 size=4096 flags=1572992
[29823.446183] zio pool=zfs vdev=/dev/mapper/D06_8Y7L error=5 type=2 offset=860650754048 size=4096 flags=1572992
[29823.484845] zio pool=zfs vdev=/dev/mapper/D05_P8JL error=5 type=2 offset=860650754048 size=4096 flags=1605761
[29823.484859] zio pool=zfs vdev=/dev/mapper/D06_8Y7L error=5 type=2 offset=860650754048 size=4096 flags=1605761
[29823.503696] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1118135730176 size=4096 flags=1572992
[29823.503707] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1118135730176 size=4096 flags=1572992
[29823.532217] zio pool=zfs vdev=/dev/mapper/D01_89BL error=5 type=2 offset=1118135730176 size=4096 flags=1605761
[29823.532228] zio pool=zfs vdev=/dev/mapper/D02_9WYL error=5 type=2 offset=1118135730176 size=4096 flags=1605761
With this bug in 2.2.1 and the block cloning bug in 2.2.0 I guess I'll continue putting off upgrading to 2.2.x and Fedora 39/FreeBSD 14. Both of these are the most serious bugs I've personally noticed making it into a released zfs version, and it happened two releases in a row.
Could 2.2.2 be made into a small bug fixing release instead of a normal release so there's more confidence in getting a trustworthy 2.2.x version?
Are you also running ZFS on top of LUKS? Asking since I see /dev/mapper/ devices.
I'm deffo LUKS but the copy paste above from our friends who have repro'd the bug doesn't seem like it's LUKS.
Gotta say that my heart almost came out thru my esophagus when I got Alertmanager alerts about various drives in several machines popping off. If anyone is interested, I'm using https://github.com/Rudd-O/zfs-stats-exporter plus Node Exporter, and the following alerting rules for ZFS:
- alert: PoolUnhealthy
  expr: zfs_pool_healthy == 0
  for: 10s
  annotations:
    summary: '{{ $labels.zpool }} in {{ $labels.instance }} is degraded or faulted'
- alert: PoolBadState
  expr: |
    node_zfs_zpool_state{state!="online"} == 1
  for: 10s
  annotations:
    summary: '{{ $labels.zpool }} in {{ $labels.instance }} is in state {{ $labels.state }}'
- alert: PoolErrored
  expr: zfs_pool_errors_total > 0
  for: 10s
  annotations:
    summary: '{{ $labels.zpool }} in {{ $labels.instance }} has had {{ $value }} {{ $labels.class }} errors'
None of my drives tripped the SMART rules:
- alert: DiskHot
  expr: smartmon_temperature_celsius_raw_value >= 60
  for: 60s
  annotations:
    summary: '{{ $labels.device }} in {{ $labels.instance }} at {{ $value }}°C'
- alert: SMARTUnhealthy
  expr: smartmon_device_smart_healthy == 0
  for: 10s
- alert: SMARTUncorrectableSectorsFound
  expr: smartmon_offline_uncorrectable_raw_value > 0
  for: 10s
  annotations:
    summary: '{{ $value }} bad sectors on {{ $labels.device }} in {{ $labels.instance }}'
- alert: SMARTPendingSectorsFound
  expr: smartmon_current_pending_sector_raw_value > 0
  for: 10s
  annotations:
    summary: '{{ $value }} pending sectors on {{ $labels.device }} in {{ $labels.instance }}'
- alert: SMARTReallocatedSectorsCountHigh
  expr: smartmon_reallocated_sector_ct_raw_value > 5
  for: 10s
  annotations:
    summary: '{{ $value }} reallocated sectors on {{ $labels.device }} in {{ $labels.instance }}'
- alert: SMARTUDMACRCErrorCountHigh
  expr: smartmon_udma_crc_error_count_raw_value > 5
  for: 10s
  annotations:
    summary: '{{ $value }} CRC errors on {{ $labels.device }} in {{ $labels.instance }}'
- alert: SMARTAttributeAtOrBelowThreshold
  expr: '{__name__=~"smartmon_.*_value", __name__!~"smartmon_.*_raw_value", __name__!~".*power_on_hours.*"} <= {__name__=~"smartmon_.*_threshold"}'
  for: 10s
Maybe I'm missing something, but everyone who has confirmed the bug in here is running ZFS on top of LUKS. blind-oracle hasn't confirmed it, but given that his device paths reside in /dev/mapper I would guess he is as well.
Maybe some funky interaction with device-mapper?
@broizter Yes, it's running on top of LUKS since it's much faster than built-in encryption. So yeah, might be some device-mapper related bug which is absent in 2.2.0
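(Tangent, but if anyone wants to sanity-check that speed comparison on their own hardware, a rough sketch follows — the pool and dataset names are placeholders, and the fio run is only a crude writer, not a rigorous benchmark:)
# in-memory dm-crypt cipher throughput; only bounds the LUKS overhead
cryptsetup benchmark --cipher aes-xts-plain64
# crude comparison against ZFS native encryption on a throwaway dataset
zfs create -o encryption=on -o keyformat=passphrase -o compression=off testpool/enctest
fio --name=encwrite --rw=write --bs=1M --size=2g --directory=/testpool/enctest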
I had the same issue using zfs 2.2.1 with LUKS, linux 6.6.2.
It seems all are write errors, and it currently shows
root DEGRADED 0 43.3K 0 too many errors
Yep. 2.2.1 has that problem too (kernel 6.5). Reverting to 2.2.0 now.
So we know that master at the commit in the description and 2.2.1 both share the issue.
Same WRITE error issue on my laptop with two single-disk zpools on LUKS.
pool: zlaptop_hdd
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 672K in 00:00:01 with 0 errors on Wed Nov 22 23:33:31 2023
config:
NAME                      STATE     READ WRITE CKSUM
zlaptop_hdd               ONLINE       0     0     0
  /dev/mapper/laptop_hdd  ONLINE       0    64     0
errors: No known data errors
pool: zlaptop_ssd
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 2.66M in 00:00:00 with 0 errors on Thu Nov 23 00:05:20 2023
config:
NAME                           STATE     READ WRITE CKSUM
zlaptop_ssd                    ONLINE       0     0     0
  /dev/mapper/laptop_ssd-data  ONLINE       0    28     0
errors: No known data errors
[ 32.610897] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785984929792 size=24576 flags=1074267264
[ 32.649825] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785984929792 size=4096 flags=1605761
[ 32.649943] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550085120 size=4096 flags=1605761
[ 32.651301] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149486592 size=4096 flags=1605761
[ 32.653041] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550203904 size=4096 flags=1589376
[ 32.653041] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149625856 size=4096 flags=1589376
[ 32.653054] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985052672 size=4096 flags=1589376
[ 32.654378] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149625856 size=4096 flags=1605761
[ 32.654479] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550203904 size=4096 flags=1605761
[ 32.654488] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985052672 size=4096 flags=1605761
[ 43.455629] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149642240 size=4096 flags=1589376
[ 43.455653] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985249280 size=8192 flags=1074267264
[ 43.455664] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550224384 size=4096 flags=1589376
[ 43.529133] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=730149642240 size=4096 flags=1605761
[ 43.529150] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=785985249280 size=4096 flags=1605761
[ 43.529283] zio pool=zlaptop_hdd vdev=/dev/mapper/laptop_hdd error=5 type=2 offset=51550224384 size=4096 flags=1605761
[ 259.390004] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451307106304 size=4096 flags=1589376
[ 259.390031] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262020669440 size=4096 flags=1589376
[ 259.394231] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262020669440 size=4096 flags=1605761
[ 259.394309] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451307106304 size=4096 flags=1605761
[ 300.313484] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451348086784 size=4096 flags=1589376
[ 300.313540] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262022987776 size=4096 flags=1589376
[ 300.319265] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=262022987776 size=4096 flags=1605761
[ 300.320045] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=274903932928 size=45056 flags=1074267264
[ 300.321407] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=451348086784 size=4096 flags=1605761
[ 300.322566] zio pool=zlaptop_ssd vdev=/dev/mapper/laptop_ssd-data error=5 type=2 offset=274903973888 size=4096 flags=1605761
Running zfs 2.2.1 (upgraded from 2.2.0) on Void Linux, custom 6.1.62 Linux kernel. The disks have 512 logical sector size, but I manually formatted LUKS to use 4096 sector size since it's optimal. The partitions are aligned. My zpools are created with ashift=12.
Since others have noted potential corruption issues with zfs_dmu_offset_next_sync=1 (edit: struck through as unrelated; see https://github.com/openzfs/zfs/issues/15526#issuecomment-1823737998), I have also tried setting it to 0, and I am still getting new errors.
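(For reference, this is roughly how the sector sizes and the tunable can be checked; the device and pool names below are placeholders rather than my actual ones:)
# logical / physical sector sizes reported by the backing disk
blockdev --getss --getpbsz /dev/sdX
# sector size the LUKS2 container was formatted with
cryptsetup luksDump /dev/sdX2 | grep -i sector
# ashift recorded for the pool (0 means it was auto-detected at creation)
zpool get ashift mypool
# the tunable mentioned above; runtime-only, reverts on reboot/module reload
echo 0 | sudo tee /sys/module/zfs/parameters/zfs_dmu_offset_next_sync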
The disks have 512 logical sector size, but I manually formatted LUKS to use 4096 sector size since it's optimal. The partitions are aligned. My zpools are created with ashift=12.
Same here. Important data points!
Me as well. All of my LUKS volumes are formatted as LUKS2 with a 4 KiB sector size (including the ones backing SLOG and L2ARC).
In my case all devices are native 4k (SLOG/ARC and spinning disks), so probably it does not matter much.
@Rudd-O I'd rename the issue; it's more like write errors than data corruption, I think. At least downgrading to 2.2.0 and doing a scrub shows no errors.
@Rudd-O since you're able to reproduce this would it be possible for you to bisect the commit between 2.2.0 and 2.2.1 which introduced this so we can get it reverted.
It would be possible, but it would be very scary.
I need a few hours to think about it. I truly do want to help this project find this issue — of all the projects I've participated in, this is the one I'm most gung-ho about.
My main machine (described above) completed the scrub successfully overnight and shows no corruption after downgrading to 2.2.0. Verifying externally computed checksums shows the same. No need to restore from backup, phew.
Unfortunately, I forgot to mask 2.2.1 on my backup machine, so that got 2.2.1 via automatic updates and is now encountering the same issue. I'll take the chance and use this box to do a bisect between 2.2.0 and 2.2.1 to find the commit that introduced the problem.
Looks like the problematic commit is bd7a02c251d8c119937e847d5161b512913667e6. If I revert that commit on top of the zfs-2.2.1 tag, then the issue does not occur.
git bisect log
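(For anyone wanting to repeat this, the workflow was roughly the following; the build/install step between rounds is distro-specific, and the reproducer is whatever reliably triggers the errors for you, so treat this as a sketch rather than the exact commands I ran:)
git clone https://github.com/openzfs/zfs.git && cd zfs
git bisect start zfs-2.2.1 zfs-2.2.0   # bad release first, then good release
# at each step: build + install this revision, run the reproducer, then:
git bisect good                        # or: git bisect bad
# once the culprit is found, test a revert on top of the release tag:
git checkout -b zfs-2.2.1-revert zfs-2.2.1
git revert bd7a02c251d8c119937e847d5161b512913667e6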
Also, while looking for a reliable way to reproduce this, I noticed that plain old writes, like:
sudo dd if=/dev/urandom of=/test/test.bin bs=1M count=102400 status=progress
weren't sufficient. That never triggered the WRITE errors. However, I had a 100% reproducibility success rate by receiving zfs snapshots. I saw errors immediately after zfs recv runs. I use 10-minutely snapshot + send/recv jobs via zrepl.
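(A stripped-down version of that workload, in case someone wants to try reproducing without zrepl — the pool/dataset names are placeholders, it assumes the default mountpoint under /testpool, and it just loops snapshot + incremental send/recv the way my jobs do:)
zfs create testpool/src
zfs snapshot testpool/src@snap0
zfs send testpool/src@snap0 | zfs recv testpool/dst
for i in $(seq 1 20); do
    dd if=/dev/urandom of=/testpool/src/file.$i bs=1M count=256 status=none
    zfs snapshot testpool/src@snap$i
    zfs send -i snap$((i-1)) testpool/src@snap$i | zfs recv -F testpool/dst
done
zpool status testpool   # check the WRITE column afterwards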
EDIT: Since this seems to be related to block sizes or alignment, this is what my systems have (on both machines that experienced the problem):
Both my SSD and my LUKS sector size are 4096 bytes, and ashift is 12. So it seems LUKS rejected writes that aren't a multiple of its block size? That could happen with random writes but not sequential writes, and it would not affect 512-byte sizes, which fit in the new cache block size.
LUKS may aid in exposing this, but it's not the central culprit, nor is it rejecting writes (at least we have no evidence of that).
I also noticed that my WRITE and CKSUM errors went up when manipulating snapshots (creating / deleting). Writes by themselves indeed do not trigger the issue — I can personally confirm that no amount of regular writes appear to cause WRITE errors by themselves.
Someone with more experience in C than me will have to figure out what went wrong in that commit.
I have the same hardware specs that @chenxiaolong mentioned. All my drives are also LUKS formatted with 4K sector sizes, aligned. All my pools are ashift 12.
The zio error=5 type=2 reminded me of #14533, and while looking for that issue I found #15414, which has some hints on how to reproduce; it might be helpful here.
The problem described in #15414 is quite possibly what's up here.
Briefly, vdev_disk has some problems with how it fills Linux IO objects it submits to the block layer, such that if lower layers need to split them up, the pieces can end up misaligned. Since you've got a few layers and mismatched block sizes, there's probably a bit higher chance than usual to hit this. I don't know if that's what's happening here, but it seems plausible.
I don't think that patch there will help, nor do I think it's the same trigger. If it was, it would be from large aggregated writes, and we'd see a very large ZIO in the error output. Still, you might try drastically lowering zfs_vdev_aggregation_limit (make it, say, 131072; if it "fixes" it, try raising it; if not, try lowering it further). This will reduce throughput considerably, but things might still work.
Another possible workaround is to lower spl_kmem_cache_slab_limit down to something silly, like 256. This will cause ZFS to use a different memory allocation strategy for small objects (like individual disk blocks). It's probably slower, and may put more memory pressure on the system, but it also might work.
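(If someone wants to try those two knobs, something like the following should do it; both are runtime-only and revert on reboot, and if the sysfs files turn out to be read-only on your build they can be set as module options instead:)
# shrink the aggregation limit; raise or lower it while testing
echo 131072 | sudo tee /sys/module/zfs/parameters/zfs_vdev_aggregation_limit
# and/or push small objects onto a different allocation path
echo 256 | sudo tee /sys/module/spl/parameters/spl_kmem_cache_slab_limit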
I should be posting a significant rework of vdev_disk in a few days (it was written for a client, and is just finishing testing). That at least will fix up the problem described in #15414, and I suspect this too. But I don't really recommend waiting when a revert will sort it out for now; it's not a given that my patch will be right, or be accepted.
@robn the bisect above points to #15452 from @amotin. Could that make the misalignment happen more often?
Possibly, though it's not a direct cause.
Slab allocators can allocate objects that cross memory page boundaries (correct operation). So a single 4K allocation could be spread across two memory pages.
A Linux IO object (struct bio) gets loaded up with "segments", and submitted. vdev_disk will never load an allocation spanning memory pages into the same segment. In our example, it will load the first 2K into the first segment, the second 2K into the second segment. This isn't a problem as such; the entire IO is still 4K-aligned, so it all works well.
So you load lots of these "split" allocations into an IO, many tens of them, and then you send them down to the block layer. The next layer may decide it needs to split the IO into two, usually because the next layer down has lower size or segment limits. So it picks a spot and chops.
If it manages to land between two of those "half" segments, then the resulting IO objects are now individually misaligned. Some drivers don't care. NVMe tends not to, SCSI does. Maybe LUKS does too.
So back to the original question. #15452 effectively makes it so there are fewer memory slabs to allocate from. It's a good change. But it means that any individual slab is now allocating more objects, which means more chance to cross a page boundary, which means more chance to have "splits" in the IO object, and so more chance that they'll be hit.
I have no evidence of this for this specific issue, but that's the thought process anyway.
I am also experiencing this error. Gentoo with custom 6.5.10 kernel, 6 disks in RAIDZ2 with L2ARC and ZIL, all over LUKS with LUKS 2.6.1. I upgraded to ZFS 2.2.0, and directly after to 2.2.1. One of the disks started to show write faults, but is now confirmed to be good (tried 3 different controllers, disk itself shows no smart errors, rewrote the whole thing with zeroes and read it back, all good). A spare also immediately went to faulted, and a second spare as well.
As I upgraded the pool features already, I can only downgrade to 2.2.0, and I just don't know if #15526 is worse than the risk from this one. So far, only one disk is faulted and the pool works as expected (after two "resilvers" and one complete read to look for potential corruption from #15526). Any advice?
I can only downgrade to 2.2.0, and I just don't know if #15526 is worse than the risk from this one [...] Any advice?
Either revert the commit from 2.2.1, or backport the 2.2.1 mitigations to 2.2.0.
I have a branch that contains the backported commits, plus a change to the default of zfs_dmu_offset_next_sync to 0, which makes the bug hard to hit: https://github.com/openzfs/zfs/issues/15526#issuecomment-1823737998
From my understanding, it is unnecessary to both disable block cloning and change the tunable; only the tunable change is needed. But I will stay on the safe side until a new release comes out.
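For reference, the tunable change on its own needs no rebuild. A sketch (the modprobe.d filename is just an example):

```sh
# Immediate, non-persistent change
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync

# Persist it across reboots
echo "options zfs zfs_dmu_offset_next_sync=0" >> /etc/modprobe.d/zfs.conf
```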
Here's a patch for zfs-2.2.0 I generated from my branch.
I've reverted https://github.com/openzfs/zfs/commit/bd7a02c251d8c119937e847d5161b512913667e6 against 2.2.1, and that seems to have corrected the problem. (My previous edit said I still appeared to be suffering from it, but that was because I had failed to rebuild the initramfs 🤦.)
Scrubbed through cleanly, don't appear to have suffered data corruption or loss as a result of this.
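For anyone wanting to do the same, here is a rough sketch of reverting that commit on top of the 2.2.1 tag, assuming a plain autotools build (adjust for your distro's packaging):

```sh
git clone https://github.com/openzfs/zfs.git && cd zfs
git checkout zfs-2.2.1
git revert --no-edit bd7a02c251d8c119937e847d5161b512913667e6
sh autogen.sh && ./configure && make -s -j"$(nproc)"
sudo make install
# then rebuild the initramfs and reboot (or reload the module)
```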
@robn I believe both the FreeBSD and Linux allocators should naturally align power-of-2-sized allocations to their size or to page size, so 4KB allocations should never cross a 4KB boundary. What I think #15452 changed, though, is how, for example, 6KB allocations are handled. Previously they were aligned to page size. IMO that does not make much sense, since while it aligns the beginning of the buffer, the end is still misaligned. My patch just formalizes that, allowing 2KB alignment in that case, saving memory as you have noted, while allowing either the beginning or the end of the buffer to be page-aligned. But it may be that this triggers some pre-existing issue in the Linux vdev_disk implementation. I am still going to look at your #15588 PR, hoping that one will fix it.
Thanks for identifying the problematic commit which aggravated this underlying problem. We'll get it reverted for 2.2 and work through sorting out a proper fix for the bio alignment issue @robn identified on Linux.
Hi Brian. On the note of reverting that commit for 2.2, do you have a rough idea as to the timeline of a 2.2.2 release?
Basically, I am a user on openSUSE Leap 15.5, and I upgraded to 2.2.1 as soon as it was packaged in the filesystems repo (I noticed that I am not the only one: #15583). After about a day I randomly ran "zpool status", saw over 24,000 write errors, and pretty much panicked; I shut down my machine and have been living on my laptop for the past several days reading GitHub. Unfortunately, the filesystems repo deletes old package versions (and I had not set my package manager to save previously installed packages, which I will do in the future), so I have not been able to simply downgrade, and I am definitely not competent enough to revert commits and compile my own stuff. I contacted the package maintainer via email and very politely asked if he could revert the package version, and he did revert something in the repo, but it's been failing to build in OSS for days and I felt it would be rude to bug him about it again.
In short, 2.2.1 hosed me pretty badly, and I hope to Neptune that I have not actually lost any data (oddly, I went for almost a day without noticing anything). The tragicomic thing is that 2.2.0 had been working wonderfully for me for several weeks (since I built my new system), because the version of coreutils in Leap 15.5 (8.32) does not use reflinks by default; when I checked the bclone stats, it was only 30 megabytes, which went to 0 after 2.2.1.
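(The bclone stats mentioned here are pool properties; a sketch of checking them, with "tank" standing in for your pool name:)

```sh
zpool get bcloneused,bclonesaved,bcloneratio tank
```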
Agreed. Getting something that “simply works” without throwing errors as soon as possible would be highly appreciated!
We're working on getting a 2.2.2 release tested, tagged, and released as soon as possible.
The revert is included in the 2.2.2 patchset (https://github.com/openzfs/zfs/pull/15602)
One question about the consequences of this bug/problem:
As I mentioned earlier, I ran with 2.2.1 for about a day before running "zpool status" and noticing 24,000 write errors (no checksum errors). I assume that ZFS did manage to save data during that time, as I was downloading some clips off of YouTube etc. without getting any obvious I/O errors at the application level or programs crashing or freezing.
Therefore: how concerned should I (and others) be about damage to existing data? Or is it only new data that might not have been written correctly? Are there any steps that might be wise to take (beyond an obvious scrub) once 2.2.2 is out and I get it installed? What about snapshot data?
@Tsuroerusu I haven't verified it, but AFAIK ZFS should retry failed operations once more without aggregation, which I suppose should not suffer from this issue. So there is a chance that your data is intact and what you see is mostly noise plus a performance penalty. But a scrub is the way to find out for sure.
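That is, once running a fixed build, something like the following (pool name is an example) will read everything back and report whatever is actually damaged:

```sh
zpool scrub tank
# once the scrub completes:
zpool status -v tank
```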
@amotin Yes, it seems that all writes succeed in the end; at least ZFS hasn't thrown any vdevs out of my pools. And a scrub after downgrading showed no errors. Though where the CKSUM errors that @Rudd-O has seen come from is hard to say... I had only WRITE ones, I think.
@amotin Thanks for answering my question, I appreciate your time. :-) That was my hope; however, I do remember seeing something like "unrecoverable error, applications may be affected", and eventually it faulted one of the four drives in my pool (the pool has two mirrored vdevs, all enterprise-grade Samsung NVMe M.2 SSDs). I feel like if I had not stopped using the machine, the pool would eventually have failed entirely.
@blind-oracle In my case, as I mentioned to amotin, there were so many errors that one of the drives was faulted, which is very unlikely to have been representative of the drive's actual state because (1) there were no actual Linux kernel write errors, LUKS errors, etc., and (2) there were no errors in the NVMe log. It is certainly down to 2.2.1 and the problem discussed in this thread.
I build and regularly test ZFS from the master branch. A few days ago I built and tested the commit specified in the headline of this issue, deploying it to three machines.
On two of them (the ones that had mirrored pools), a data corruption issue arose where many WRITE errors (hundreds) would accumulate when deleting snapshots, but no CKSUM errors took place, nor was there evidence that hardware was the issue. I tried a scrub, and that just made the problem worse.
Initially I assumed I had gotten extremely unlucky and hardware was dying, because both drives of one mirror leg were experiencing the issue but none of the drives of the other leg were -- so I decided it best to be safe and attach a third mirror drive to the first leg (that was $200, oof). Since I had no more drive bays, I popped the new drive into a USB port (USB 2.0!) and attached it to the first leg.
During the resilvering process, the third drive also began experiencing WRITE errors, and the first CKSUM errors.
I tried different kernels (6.4, 6.5 from Fedora) to no avail. The error was present either way. zpool clear was followed by a few errors whenever disks were written to, and hundreds of errors whenever snapshots were deleted (I have zfs-auto-snapshot running in the background).
Then, my backup machine began experiencing the same WRITE errors. I can't have this backup die on me, especially now that I have actual data corruption on the big data file server.
At this point I concluded there must be some serious issue with the code, and decided to downgrade all machines to a known-good build. After downgrading the most severely affected machine (whose logs are above) to my build of e47e9bbe86f2e8fe5da0fc7c3a9014e1f8c132a9, everything appears nominal and the resilvering is progressing without issues. Deleting snapshots also is no longer causing issues.
Nonetheless, I have forever lost what appears to be "who knows what" metadata, and of course four days spent trying to resilver unsuccessfully:
In conclusion, something added between e47e9bbe86f2e8fe5da0fc7c3a9014e1f8c132a9..786641dcf9a7e35f26a1b4778fc710c7ec0321bf is causing this issue.
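For completeness, narrowing that range down further would look roughly like this (a sketch; the test step is whatever reproduces the WRITE errors, ideally against a disposable pool):

```sh
git clone https://github.com/openzfs/zfs.git && cd zfs
git bisect start 786641dcf9a7e35f26a1b4778fc710c7ec0321bf e47e9bbe86f2e8fe5da0fc7c3a9014e1f8c132a9
# at each step: build, install, boot the test build, exercise the pool, then mark it
git bisect good    # or: git bisect bad
```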