Open-CAS / open-cas-linux


random hard lockup happened when dirty is full #1461

Closed zp001paul closed 5 months ago

zp001paul commented 7 months ago

vmcore-dmesg.txt

Description

My server has experienced several panics over the last few months. When the cache is completely dirty, it panics within a few days; otherwise it survives much longer. In the last panic I printed the stacks of all CPUs and found a hard lockup.

[167596.567274] NMI watchdog: Watchdog detected hard LOCKUP on cpu 16
[167596.567307] Modules linked in:
[167596.567744]  fuse btrfs raid6_pq xor vfat msdos fat ext4 mbcache jbd2 cas_cache(OE) cas_disk(OE) binfmt_misc tcp_diag udp_diag inet_diag xt_conntrack ipt_MASQUERADE nf_nat_masquerade_ipv4 nf_conntrack_netlink nfnetlink xt_addrtype iptable_filter iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack br_netfilter bridge stp llc overlay(T) bonding sunrpc iTCO_wdt iTCO_vendor_support nfit libnvdimm intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd pcspkr ses enclosure scsi_transport_sas sg mei_me joydev i2c_i801 mei wmi ipmi_si ipmi_devintf ipmi_msghandler pinctrl_lewisburg pinctrl_intel acpi_power_meter acpi_pad ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic
[167596.571095]  ast i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm i40e ahci crct10dif_pclmul crct10dif_common ice(OE) libahci crc32c_intel libata serio_raw megaraid_sas auxiliary(OE) devlink ptp pps_core drm_panel_orientation_quirks dm_mirror dm_region_hash dm_log dm_mod
[167596.573073] CPU: 16 PID: 15492 Comm: hummsim Kdump: loaded Tainted: G           OE  ------------ T 3.10.0-1160.el7.x86_64 #1
[167596.573775] Hardware name: H3C R4300 G5/RS45M2C11SA, BIOS 5.41 01/07/2022
[167596.574486] Call Trace:
[167596.575193]  <NMI>  [<ffffffffbcd81340>] dump_stack+0x19/0x1b
[167596.575929]  [<ffffffffbc74edf5>] watchdog_overflow_callback+0x135/0x140
[167596.576670]  [<ffffffffbc7a88a7>] __perf_event_overflow+0x57/0x100
[167596.577409]  [<ffffffffbc7b20a4>] perf_event_overflow+0x14/0x20
[167596.578155]  [<ffffffffbc60a9b0>] handle_pmi_common+0x1a0/0x250
[167596.578899]  [<ffffffffbc988358>] ? ioremap_page_range+0x2e8/0x480
[167596.579649]  [<ffffffffbc8051b4>] ? vunmap_page_range+0x234/0x470
[167596.580404]  [<ffffffffbca4f796>] ? ghes_copy_tofrom_phys+0x116/0x210
[167596.581162]  [<ffffffffbc60ac8f>] intel_pmu_handle_irq+0xcf/0x1d0
[167596.581922]  [<ffffffffbcd8a031>] perf_event_nmi_handler+0x31/0x50
[167596.582688]  [<ffffffffbcd8b93c>] nmi_handle.isra.0+0x8c/0x150
[167596.583456]  [<ffffffffbcd8bc18>] do_nmi+0x218/0x460
[167596.584224]  [<ffffffffbcd8ad9c>] end_repeat_nmi+0x1e/0x81
[167596.585001]  [<ffffffffbc7175d2>] ? native_queued_spin_lock_slowpath+0x122/0x200
[167596.585783]  [<ffffffffbc7175d2>] ? native_queued_spin_lock_slowpath+0x122/0x200
[167596.586561]  [<ffffffffbc7175d2>] ? native_queued_spin_lock_slowpath+0x122/0x200
[167596.587329]  <EOE>  [<ffffffffbcd7b37b>] queued_spin_lock_slowpath+0xb/0xf // waiting for q->queue_lock
[167596.588116]  [<ffffffffbcd89998>] _raw_spin_lock_irq+0x28/0x30
[167596.588899]  [<ffffffffbc956eb8>] blk_queue_bio+0x88/0x400
[167596.589687]  [<ffffffffbc955167>] generic_make_request+0x147/0x380
[167596.590478]  [<ffffffffbc955410>] submit_bio+0x70/0x150
[167596.591272]  [<ffffffffbc88c325>] ? bio_alloc_bioset+0x115/0x310
[167596.592097]  [<ffffffffc0717283>] _xfs_buf_ioapply+0x2f3/0x460 [xfs]
[167596.592919]  [<ffffffffc0718e75>] ? _xfs_buf_read+0x25/0x30 [xfs]
[167596.593741]  [<ffffffffc0718c72>] __xfs_buf_submit+0x72/0x250 [xfs]
[167596.594574]  [<ffffffffc0749de9>] ? xfs_trans_read_buf_map+0xe9/0x2c0 [xfs]
[167596.595409]  [<ffffffffc0718e75>] _xfs_buf_read+0x25/0x30 [xfs]
[167596.596244]  [<ffffffffc0718f79>] xfs_buf_read_map+0xf9/0x160 [xfs]
[167596.597088]  [<ffffffffc0749de9>] xfs_trans_read_buf_map+0xe9/0x2c0 [xfs]
[167596.597936]  [<ffffffffc06f65a3>] xfs_da_read_buf+0xd3/0x120 [xfs]
[167596.598781]  [<ffffffffc06fcc36>] xfs_dir3_data_read+0x26/0x60 [xfs]
[167596.599632]  [<ffffffffc0701056>] xfs_dir2_leafn_lookup_for_entry+0xf6/0x3c0 [xfs]
[167596.600493]  [<ffffffffc07029b7>] xfs_dir2_leafn_lookup_int+0x17/0x30 [xfs]
[167596.601353]  [<ffffffffc06f7bb4>] xfs_da3_node_lookup_int+0x324/0x350 [xfs]
[167596.602217]  [<ffffffffc070366d>] xfs_dir2_node_lookup+0x4d/0x170 [xfs]
[167596.603074]  [<ffffffffc06fa8bd>] xfs_dir_lookup+0x1bd/0x1e0 [xfs]
[167596.603913]  [<ffffffffc072bc69>] xfs_lookup+0x69/0x140 [xfs]
[167596.604726]  [<ffffffffc0728a88>] xfs_vn_lookup+0x78/0xc0 [xfs]
[167596.605502]  [<ffffffffbc8589e3>] lookup_real+0x23/0x60
[167596.606253]  [<ffffffffbc859402>] __lookup_hash+0x42/0x60
[167596.606980]  [<ffffffffbcd7deec>] lookup_slow+0x42/0xa7
[167596.607680]  [<ffffffffbc85c5cf>] link_path_walk+0x80f/0x8b0
[167596.608359]  [<ffffffffbcd86c8f>] ? __schedule+0x3af/0x860
[167596.609014]  [<ffffffffbc85c7da>] path_lookupat+0x7a/0x8d0
[167596.609640]  [<ffffffffbcd87169>] ? schedule+0x29/0x70
[167596.610244]  [<ffffffffbc828635>] ? kmem_cache_alloc+0x35/0x1f0
[167596.610829]  [<ffffffffbc85fbcf>] ? getname_flags+0x4f/0x1a0
[167596.611407]  [<ffffffffbc85d05b>] filename_lookup+0x2b/0xc0
[167596.611980]  [<ffffffffbc860d67>] user_path_at_empty+0x67/0xc0
[167596.612544]  [<ffffffffbc7c1067>] ? mempool_free_slab+0x17/0x20
[167596.613107]  [<ffffffffbc7c147f>] ? mempool_free+0x4f/0xa0
[167596.613662]  [<ffffffffbc860dd1>] user_path_at+0x11/0x20
[167596.614217]  [<ffffffffbc8535c3>] vfs_fstatat+0x63/0xc0
[167596.614768]  [<ffffffffbc853a34>] SYSC_newfstatat+0x24/0x60
[167596.615322]  [<ffffffffbc73e6a6>] ? __audit_syscall_exit+0x1f6/0x2b0
[167596.615874]  [<ffffffffbc853e5e>] SyS_newfstatat+0xe/0x10
[167596.616426]  [<ffffffffbcd93f92>] system_call_fastpath+0x25/0x2a

It looks like all the I/O threads are stuck trying to take queue_lock. Then I found the suspicious lock holder.

[167596.942491] CPU: 19 PID: 115501 Comm: cas_io_7_19 Kdump: loaded Tainted: G           OE  ------------ T 3.10.0-1160.el7.x86_64 #1
[167596.944076] Hardware name: H3C R4300 G5/RS45M2C11SA, BIOS 5.41 01/07/2022
[167596.944871] task: ffff8a495a715280 ti: ffff8a495a750000 task.ti: ffff8a495a750000
[167596.945657] RIP: 0010:[<ffffffffbc717680>]  [<ffffffffbc717680>] native_queued_spin_lock_slowpath+0x1d0/0x200
[167596.946460] RSP: 0018:ffff8a4affac3ab0  EFLAGS: 00000002
[167596.947254] RAX: 0000000000000101 RBX: ffffa0e8629c0140 RCX: 0000000000000001
[167596.948038] RDX: 0000000000000101 RSI: 0000000000000001 RDI: ffffa0e8629c0150
[167596.948814] RBP: ffff8a4affac3ab0 R08: 0000000000000101 R09: ffff8a4c0936b000
[167596.949584] R10: ffff8a22d9132bc0 R11: 0000000000000000 R12: ffff8a4affac3ae0
[167596.950346] R13: ffffa0e8629c0150 R14: 0000000000000000 R15: ffffa0e86287e000
[167596.951103] FS:  0000000000000000(0000) GS:ffff8a4affac0000(0000) knlGS:0000000000000000
[167596.951863] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[167596.952631] CR2: 000000c001e68012 CR3: 0000003d0638a000 CR4: 0000000000760fe0
[167596.953404] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[167596.954166] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[167596.954912] PKRU: 00000000
[167596.955656] Call Trace:
[167596.956397]  <IRQ>  [<ffffffffbcd7b37b>] queued_spin_lock_slowpath+0xb/0xf
[167596.957156]  [<ffffffffbcd89a90>] _raw_spin_lock+0x20/0x30 // compete with process-context for the same lock
[167596.957918]  [<ffffffffc08e7d1c>] ocf_async_unlock+0x3c/0xa0 [cas_cache]
[167596.958685]  [<ffffffffc08c8e1e>] ocf_mngt_cache_unlock+0x1e/0x30 [cas_cache]
[167596.959457]  [<ffffffffc08ee613>] ocf_cleaner_run_complete+0x23/0x50 [cas_cache]
[167596.960232]  [<ffffffffc08eec51>] _acp_flush_end+0xa1/0x200 [cas_cache]
[167596.961006]  [<ffffffffc08e9e20>] _ocf_cleaner_complete_req+0x50/0x60 [cas_cache]
[167596.961789]  [<ffffffffc08ea80f>] _ocf_cleaner_flush_cache_io_end+0x3f/0x60 [cas_cache]
[167596.962579]  [<ffffffffc08bcccf>] cas_bd_io_end+0x3f/0x50 [cas_cache]
[167596.963365]  [<ffffffffc08bcd44>] cas_bd_io_end_callback+0x64/0x80 [cas_cache]
[167596.964149]  [<ffffffffbc88d6bc>] bio_endio+0x8c/0x130
[167596.964924]  [<ffffffffbc9558a0>] blk_update_request+0x90/0x370
[167596.965702]  [<ffffffffbc955b9c>] blk_update_bidi_request+0x1c/0x80
[167596.966477]  [<ffffffffbc956707>] __blk_end_bidi_request+0x17/0x40
[167596.967248]  [<ffffffffbc95680f>] __blk_end_request_all+0x1f/0x30
[167596.968012]  [<ffffffffbc958f35>] blk_flush_complete_seq+0x345/0x360
[167596.968790]  [<ffffffffbc959330>] flush_end_io+0x1f0/0x2f0
[167596.969558]  [<ffffffffbc955df3>] blk_finish_request+0x83/0x130
[167596.970323]  [<ffffffffbcaed3f6>] scsi_end_request+0x116/0x1e0
[167596.971141]  [<ffffffffbcaed646>] scsi_io_completion+0x126/0x720
[167596.971895]  [<ffffffffc0282af5>] ? complete_cmd_fusion+0x5a5/0x7a0 [megaraid_sas]
[167596.972659]  [<ffffffffbcae297c>] scsi_finish_command+0xdc/0x140
[167596.973422]  [<ffffffffbcaecbd0>] scsi_softirq_done+0x130/0x160
[167596.974182]  [<ffffffffbc95d226>] blk_done_softirq+0x96/0xc0
[167596.974934]  [<ffffffffbc6a4b95>] __do_softirq+0xf5/0x280
[167596.975673]  [<ffffffffbcd974ec>] call_softirq+0x1c/0x30
[167596.976386]  [<ffffffffbc62f715>] do_softirq+0x65/0xa0
[167596.977085]  [<ffffffffbc6a4f15>] irq_exit+0x105/0x110
[167596.977787]  [<ffffffffbcd98936>] do_IRQ+0x56/0xf0
[167596.978487]  [<ffffffffbcd8a36a>] common_interrupt+0x16a/0x16a
[167596.979172]  <EOI>  [<ffffffffbcd89a80>] ? _raw_spin_lock+0x10/0x30 // It doesn't disable IRQ
[167596.979840]  [<ffffffffc08e7f1b>] ? ocf_async_is_locked+0x1b/0x50 [cas_cache] 
[167596.980512]  [<ffffffffc08c8ed5>] ocf_mngt_cache_is_locked+0x15/0x20 [cas_cache]
[167596.981180]  [<ffffffffc08f0811>] ocf_lru_clean+0x91/0x360 [cas_cache]
[167596.981830]  [<ffffffffc08bcca6>] ? cas_bd_io_end+0x16/0x50 [cas_cache]
[167596.982473]  [<ffffffffc08f00e0>] ? add_lru_head_nobalance+0xc0/0xc0 [cas_cache]
[167596.983115]  [<ffffffffc08efec0>] ? cleaning_policy_acp_recovery+0x230/0x230 [cas_cache]
[167596.983759]  [<ffffffffc08f0c62>] ? ocf_lru_req_clines+0x182/0x6d0 [cas_cache]
[167596.984411]  [<ffffffffc08d513f>] ? ocf_engine_lookup_map_entry+0xbf/0x100 [cas_cache]
[167596.985070]  [<ffffffffc08f2345>] ? ocf_space_managment_remap_do+0x315/0x3c0 [cas_cache]
[167596.985734]  [<ffffffffc08d597f>] ocf_engine_prepare_clines+0x34f/0x3e0 [cas_cache]
[167596.986394]  [<ffffffffc08d672d>] ocf_write_wb+0x4d/0xf0 [cas_cache]
[167596.987048]  [<ffffffffc08e394c>] ocf_io_handle+0x3c/0x50 [cas_cache]
[167596.987694]  [<ffffffffc08e3995>] ocf_queue_run_single+0x35/0x40 [cas_cache]
[167596.988347]  [<ffffffffc08e39c8>] ocf_queue_run+0x28/0x50 [cas_cache]
[167596.989003]  [<ffffffffc08bdb9b>] _cas_io_queue_thread+0xfb/0x150 [cas_cache]
[167596.989653]  [<ffffffffbc6c6d10>] ? wake_up_atomic_t+0x30/0x30
[167596.990313]  [<ffffffffc08bdaa0>] ? cas_blk_identify_type+0x10/0x10 [cas_cache]
[167596.990970]  [<ffffffffbc6c5c21>] kthread+0xd1/0xe0
[167596.991620]  [<ffffffffbc6c5b50>] ? insert_kthread_work+0x40/0x40
[167596.992270]  [<ffffffffbcd93ddd>] ret_from_fork_nospec_begin+0x7/0x21
[167596.992917]  [<ffffffffbc6c5b50>] ? insert_kthread_work+0x40/0x40

It seems the process context and the IRQ context deadlock on the same lock. ocf_lru_clean() takes lock->waiters_lock in process context without disabling IRQs (via ocf_async_is_locked()); when the interrupt for the completed flush arrives on that same CPU while the lock is still held, the completion path runs from softirq on IRQ exit and (_acp_flush_end() -> ocf_mngt_cache_unlock() -> ocf_async_unlock()) spins on the same lock forever, so the interrupted holder can never resume to release it. Since lock->waiters_lock is shared between the two contexts, should we use spinlock_irqsave() or spinlock_bh() instead? (A userspace sketch of the interleaving follows the code below.)


void ocf_async_unlock(struct ocf_async_lock *lock)
{
    struct list_head waiters;

    INIT_LIST_HEAD(&waiters);

    env_spinlock_lock(&lock->waiters_lock); /* also reached from IRQ/softirq context via _acp_flush_end() */

    ENV_BUG_ON(lock->rd);
    ENV_BUG_ON(!lock->wr);

    lock->wr = 0;

    _ocf_async_lock_collect_waiters(lock, &waiters);

    env_spinlock_unlock(&lock->waiters_lock);

    _ocf_async_lock_run_waiters(lock, &waiters, 0);
}

bool ocf_async_is_locked(struct ocf_async_lock *lock)
{
    bool locked;

    env_spinlock_lock(&lock->waiters_lock); /* taken in process context with IRQs enabled */
    locked = lock->rd || lock->wr;
    env_spinlock_unlock(&lock->waiters_lock);

    return locked;
}
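
To make the interleaving concrete, here is a small userspace analogue of the lockup (entirely my own sketch; a POSIX signal stands in for the hardware IRQ, and none of these names come from OCF). The main flow takes a bare spinlock with the "interrupt" still enabled, the handler fires on the same thread and spins on that lock, and the holder can never resume:

#include <signal.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

/* Analogue of lock->waiters_lock: a bare spinlock with no IRQ masking. */
static atomic_flag waiters_lock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    while (atomic_flag_test_and_set(&waiters_lock))
        ; /* spin */
}

static void spin_unlock(void)
{
    atomic_flag_clear(&waiters_lock);
}

/* Stands in for the completion path (_acp_flush_end -> ocf_async_unlock)
 * that runs asynchronously on the same CPU. */
static void completion_handler(int sig)
{
    (void)sig;
    spin_lock();   /* spins forever: the interrupted code below holds it */
    spin_unlock();
}

int main(void)
{
    signal(SIGALRM, completion_handler);

    spin_lock();   /* like ocf_async_is_locked(): taken with "IRQs" enabled */
    alarm(1);      /* the asynchronous completion is now pending */
    sleep(2);      /* handler interrupts us while we still hold the lock */
    spin_unlock();

    puts("never reached - the process hard-locks, as in the NMI report");
    return 0;
}

Blocking SIGALRM around the critical section (the analogue of spin_lock_irqsave()) makes the hang go away.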

Expected Behavior

Actual Behavior

Steps to Reproduce

  1. Set up a few CAS instances
  2. Keep writing to them until DIRTY is full
  3. Keep running and watch for the panic

Possible Fix

I changed all the spinlock() calls to spinlock_irqsave() in ocf/src/utils/utils_async_lock.c, but I am not sure this is a real fix.
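
For reference, this is roughly what that change looks like in ocf_async_unlock() (a sketch of my patch, not a tested fix; it assumes the env layer exposes env_spinlock_lock_irqsave()/env_spinlock_unlock_irqrestore() wrappers with the usual spin_lock_irqsave() semantics):

void ocf_async_unlock(struct ocf_async_lock *lock)
{
    struct list_head waiters;
    unsigned long flags; /* flags type as required by the env wrapper */

    INIT_LIST_HEAD(&waiters);

    /* Mask local IRQs while waiters_lock is held, so a completion running
     * on this CPU cannot interrupt the holder and spin on the lock. */
    env_spinlock_lock_irqsave(&lock->waiters_lock, flags);

    ENV_BUG_ON(lock->rd);
    ENV_BUG_ON(!lock->wr);

    lock->wr = 0;

    _ocf_async_lock_collect_waiters(lock, &waiters);

    env_spinlock_unlock_irqrestore(&lock->waiters_lock, flags);

    _ocf_async_lock_run_waiters(lock, &waiters, 0);
}

In this particular trace the completion runs from softirq context (blk_done_softirq), so spin_lock_bh() would also break the cycle, but the irqsave variant seems the safer choice if any completion can be delivered from hard-IRQ context.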

Logs

The full dmesg of the last panic is attached.

Your Environment

╔══════════════════╤═══════════╤═══════╤═════════════╗
║ Usage statistics │ Count     │ %     │ Units       ║
╠══════════════════╪═══════════╪═══════╪═════════════╣
║ Occupancy        │ 234104000 │ 100.0 │ 4KiB Blocks ║
║ Free             │ 32        │ 0.0   │ 4KiB Blocks ║
║ Clean            │ 1216      │ 0.0   │ 4KiB Blocks ║
║ Dirty            │ 234102784 │ 100.0 │ 4KiB Blocks ║
╚══════════════════╧═══════════╧═══════╧═════════════╝

╔══════════════════════╤═══════════╤═══════╤══════════╗
║ Request statistics   │ Count     │ %     │ Units    ║
╠══════════════════════╪═══════════╪═══════╪══════════╣
║ Read hits            │ 433982    │ 0.3   │ Requests ║
║ Read partial misses  │ 889       │ 0.0   │ Requests ║
║ Read full misses     │ 54794     │ 0.0   │ Requests ║
║ Read total           │ 489665    │ 0.3   │ Requests ║
╟──────────────────────┼───────────┼───────┼──────────╢
║ Write hits           │ 112766652 │ 73.3  │ Requests ║
║ Write partial misses │ 5472      │ 0.0   │ Requests ║
║ Write full misses    │ 30930034  │ 20.1  │ Requests ║
║ Write total          │ 143702158 │ 93.3  │ Requests ║
╟──────────────────────┼───────────┼───────┼──────────╢
║ Pass-Through reads   │ 143       │ 0.0   │ Requests ║
║ Pass-Through writes  │ 9751876   │ 6.3   │ Requests ║
║ Serviced requests    │ 144191823 │ 93.7  │ Requests ║
╟──────────────────────┼───────────┼───────┼──────────╢
║ Total requests       │ 153943842 │ 100.0 │ Requests ║
╚══════════════════════╧═══════════╧═══════╧══════════╝

╔══════════════════════════════════╤═══════════╤═══════╤═════════════╗
║ Block statistics                 │ Count     │ %     │ Units       ║
╠══════════════════════════════════╪═══════════╪═══════╪═════════════╣
║ Reads from core(s)               │ 277173    │ 0.1   │ 4KiB Blocks ║
║ Writes to core(s)                │ 313603620 │ 99.9  │ 4KiB Blocks ║
║ Total to/from core(s)            │ 313880793 │ 100.0 │ 4KiB Blocks ║
╟──────────────────────────────────┼───────────┼───────┼─────────────╢
║ Reads from cache                 │ 198071619 │ 20.7  │ 4KiB Blocks ║
║ Writes to cache                  │ 758133245 │ 79.3  │ 4KiB Blocks ║
║ Total to/from cache              │ 956204864 │ 100.0 │ 4KiB Blocks ║
╟──────────────────────────────────┼───────────┼───────┼─────────────╢
║ Reads from exported object(s)    │ 3031928   │ 0.3   │ 4KiB Blocks ║
║ Writes to exported object(s)     │ 876722477 │ 99.7  │ 4KiB Blocks ║
║ Total to/from exported object(s) │ 879754405 │ 100.0 │ 4KiB Blocks ║
╚══════════════════════════════════╧═══════════╧═══════╧═════════════╝

╔════════════════════╤═══════╤═════╤══════════╗
║ Error statistics   │ Count │ %   │ Units    ║
╠════════════════════╪═══════╪═════╪══════════╣
║ Cache read errors  │ 0     │ 0.0 │ Requests ║
║ Cache write errors │ 0     │ 0.0 │ Requests ║
║ Cache total errors │ 0     │ 0.0 │ Requests ║
╟────────────────────┼───────┼─────┼──────────╢
║ Core read errors   │ 0     │ 0.0 │ Requests ║
║ Core write errors  │ 0     │ 0.0 │ Requests ║
║ Core total errors  │ 0     │ 0.0 │ Requests ║
╟────────────────────┼───────┼─────┼──────────╢
║ Total errors       │ 0     │ 0.0 │ Requests ║
╚════════════════════╧═══════╧═════╧══════════╝

SIT [root@sz01bla1269test ~]# casadm -P -i 25
Cache Id                   25
Cache Size                 56189280 [4KiB Blocks] / 214.35 [GiB]
Cache Device               /dev/sdf15
Exported Object            -
Core Devices               1
Inactive Core Devices      0
Write Policy               wb
Cleaning Policy            acp
Promotion Policy           always
Cache line size            64 [KiB]
Metadata Memory Footprint  276.6 [MiB]
Dirty for                  88775 [s] / 1 [d] 39 [m] 35 [s]
Status                     Running

╔══════════════════╤══════════╤═══════╤═════════════╗
║ Usage statistics │ Count    │ %     │ Units       ║
╠══════════════════╪══════════╪═══════╪═════════════╣
║ Occupancy        │ 56189264 │ 100.0 │ 4KiB Blocks ║
║ Free             │ 16       │ 0.0   │ 4KiB Blocks ║
║ Clean            │ 5447840  │ 9.7   │ 4KiB Blocks ║
║ Dirty            │ 50741424 │ 90.3  │ 4KiB Blocks ║
╚══════════════════╧══════════╧═══════╧═════════════╝

╔══════════════════════╤══════════╤═══════╤══════════╗
║ Request statistics   │ Count    │ %     │ Units    ║
╠══════════════════════╪══════════╪═══════╪══════════╣
║ Read hits            │ 163537   │ 0.3   │ Requests ║
║ Read partial misses  │ 4        │ 0.0   │ Requests ║
║ Read full misses     │ 1340     │ 0.0   │ Requests ║
║ Read total           │ 164881   │ 0.4   │ Requests ║
╟──────────────────────┼──────────┼───────┼──────────╢
║ Write hits           │ 35817225 │ 76.3  │ Requests ║
║ Write partial misses │ 666      │ 0.0   │ Requests ║
║ Write full misses    │ 10611184 │ 22.6  │ Requests ║
║ Write total          │ 46429075 │ 98.9  │ Requests ║
╟──────────────────────┼──────────┼───────┼──────────╢
║ Pass-Through reads   │ 0        │ 0.0   │ Requests ║
║ Pass-Through writes  │ 329045   │ 0.7   │ Requests ║
║ Serviced requests    │ 46593956 │ 99.3  │ Requests ║
╟──────────────────────┼──────────┼───────┼──────────╢
║ Total requests       │ 46923001 │ 100.0 │ Requests ║
╚══════════════════════╧══════════╧═══════╧══════════╝

╔══════════════════════════════════╤═══════════╤═══════╤═════════════╗
║ Block statistics                 │ Count     │ %     │ Units       ║
╠══════════════════════════════════╪═══════════╪═══════╪═════════════╣
║ Reads from core(s)               │ 10385     │ 0.0   │ 4KiB Blocks ║
║ Writes to core(s)                │ 101331247 │ 100.0 │ 4KiB Blocks ║
║ Total to/from core(s)            │ 101341632 │ 100.0 │ 4KiB Blocks ║
╟──────────────────────────────────┼───────────┼───────┼─────────────╢
║ Reads from cache                 │ 100243009 │ 28.0  │ 4KiB Blocks ║
║ Writes to cache                  │ 257151669 │ 72.0  │ 4KiB Blocks ║
║ Total to/from cache              │ 357394678 │ 100.0 │ 4KiB Blocks ║
╟──────────────────────────────────┼───────────┼───────┼─────────────╢
║ Reads from exported object(s)    │ 988786    │ 0.4   │ 4KiB Blocks ║
║ Writes to exported object(s)     │ 259636373 │ 99.6  │ 4KiB Blocks ║
║ Total to/from exported object(s) │ 260625159 │ 100.0 │ 4KiB Blocks ║
╚══════════════════════════════════╧═══════════╧═══════╧═════════════╝

╔════════════════════╤═══════╤═════╤══════════╗
║ Error statistics   │ Count │ %   │ Units    ║
╠════════════════════╪═══════╪═════╪══════════╣
║ Cache read errors  │ 0     │ 0.0 │ Requests ║
║ Cache write errors │ 0     │ 0.0 │ Requests ║
║ Cache total errors │ 0     │ 0.0 │ Requests ║
╟────────────────────┼───────┼─────┼──────────╢
║ Core read errors   │ 0     │ 0.0 │ Requests ║
║ Core write errors  │ 0     │ 0.0 │ Requests ║
║ Core total errors  │ 0     │ 0.0 │ Requests ║
╟────────────────────┼───────┼─────┼──────────╢
║ Total errors       │ 0     │ 0.0 │ Requests ║
╚════════════════════╧═══════╧═════╧══════════╝