spdk / spdk

Storage Performance Development Kit
https://spdk.io/

[nvmf_host_management] Unresponsive nvmf_tgt leading to a job timeout #3406

Open mikeBashStuff opened 4 months ago

mikeBashStuff commented 4 months ago

CI Intermittent Failure: during the nvmf_host_management test, nvmf_tgt became unresponsive and the job timed out.

https://ci.spdk.io/results/autotest-nightly-lts/builds/1916/archive/nvmf-cvl-phy-autotest_3208/index.html

04:23:56  START TEST nvmf_host_management
04:23:56  ************************************
04:23:56   04:23:55 -- common/autotest_common.sh@1104 -- # nvmf_host_management
04:23:56   04:23:55 -- target/host_management.sh@69 -- # starttarget
04:23:56   04:23:55 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
04:23:56   04:23:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
04:23:56   04:23:55 -- common/autotest_common.sh@712 -- # xtrace_disable
04:23:56   04:23:55 -- common/autotest_common.sh@10 -- # set +x
04:23:56   04:23:55 -- nvmf/common.sh@469 -- # nvmfpid=3008122
04:23:56   04:23:55 -- nvmf/common.sh@470 -- # waitforlisten 3008122
04:23:56   04:23:55 -- common/autotest_common.sh@819 -- # '[' -z 3008122 ']'
04:23:56   04:23:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
04:23:56   04:23:55 -- common/autotest_common.sh@824 -- # local max_retries=100
04:23:56   04:23:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
04:23:56  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
04:23:56   04:23:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
04:23:56   04:23:55 -- common/autotest_common.sh@828 -- # xtrace_disable
04:23:56   04:23:55 -- common/autotest_common.sh@10 -- # set +x
04:23:56  [2024-06-12 04:23:55.539849] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
04:23:56  [2024-06-12 04:23:55.539893] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
04:23:56  EAL: No free 2048 kB hugepages reported on node 1
04:23:56  [2024-06-12 04:23:55.600120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
04:23:56  [2024-06-12 04:23:55.675860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
04:23:56  [2024-06-12 04:23:55.675962] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
04:23:56  [2024-06-12 04:23:55.675970] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
04:23:56  [2024-06-12 04:23:55.675976] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
04:23:56  [2024-06-12 04:23:55.676010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
04:23:56  [2024-06-12 04:23:55.676030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
04:23:56  [2024-06-12 04:23:55.676140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
04:23:56  [2024-06-12 04:23:55.676142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
04:23:56   04:23:56 -- common/autotest_common.sh@848 -- # (( i == 0 ))
04:23:56   04:23:56 -- common/autotest_common.sh@852 -- # return 0
04:23:56   04:23:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
04:23:56   04:23:56 -- common/autotest_common.sh@718 -- # xtrace_disable
04:23:56   04:23:56 -- common/autotest_common.sh@10 -- # set +x
04:23:56   04:23:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
04:23:56   04:23:56 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
04:23:56   04:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable
04:23:56   04:23:56 -- common/autotest_common.sh@10 -- # set +x
04:23:56  [2024-06-12 04:23:56.403742] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xbefaf0/0xbef130) succeed.
04:23:56  [2024-06-12 04:23:56.413649] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xbf0e60/0xbef6b0) succeed.
04:23:56  [2024-06-12 04:23:56.413670] rdma.c:2843:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576
04:23:56   04:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
04:23:56   04:23:56 -- target/host_management.sh@20 -- # timing_enter create_subsystem
04:23:56   04:23:56 -- common/autotest_common.sh@712 -- # xtrace_disable
04:23:56   04:23:56 -- common/autotest_common.sh@10 -- # set +x
04:23:56   04:23:56 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt
04:23:56   04:23:56 -- target/host_management.sh@23 -- # cat
04:23:56   04:23:56 -- target/host_management.sh@30 -- # rpc_cmd
04:23:56   04:23:56 -- common/autotest_common.sh@551 -- # xtrace_disable
04:23:56   04:23:56 -- common/autotest_common.sh@10 -- # set +x
04:23:56  Malloc0
04:23:56  [2024-06-12 04:23:56.476561] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
04:23:56   04:23:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
04:23:56   04:23:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems
04:23:56   04:23:56 -- common/autotest_common.sh@718 -- # xtrace_disable
04:23:56   04:23:56 -- common/autotest_common.sh@10 -- # set +x
04:23:56   04:23:56 -- target/host_management.sh@73 -- # perfpid=3008387
04:23:56   04:23:56 -- target/host_management.sh@74 -- # waitforlisten 3008387 /var/tmp/bdevperf.sock
04:23:56   04:23:56 -- common/autotest_common.sh@819 -- # '[' -z 3008387 ']'
04:23:56   04:23:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
04:23:56   04:23:56 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
04:23:56   04:23:56 -- common/autotest_common.sh@824 -- # local max_retries=100
04:23:56    04:23:56    -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
04:23:56   04:23:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
04:23:56  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
04:23:56    04:23:56    -- nvmf/common.sh@520 -- # config=()
04:23:56   04:23:56 -- common/autotest_common.sh@828 -- # xtrace_disable
04:23:56    04:23:56    -- nvmf/common.sh@520 -- # local subsystem config
04:23:56   04:23:56 -- common/autotest_common.sh@10 -- # set +x
04:23:56    04:23:56    -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
04:23:56    04:23:56    -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
04:23:56  {
04:23:56    "params": {
04:23:56      "name": "Nvme$subsystem",
04:23:56      "trtype": "$TEST_TRANSPORT",
04:23:56      "traddr": "$NVMF_FIRST_TARGET_IP",
04:23:56      "adrfam": "ipv4",
04:23:56      "trsvcid": "$NVMF_PORT",
04:23:56      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
04:23:56      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
04:23:56      "hdgst": ${hdgst:-false},
04:23:56      "ddgst": ${ddgst:-false}
04:23:56    },
04:23:56    "method": "bdev_nvme_attach_controller"
04:23:56  }
04:23:56  EOF
04:23:56  )")
04:23:56     04:23:56   -- nvmf/common.sh@542 -- # cat
04:23:56    04:23:56    -- nvmf/common.sh@544 -- # jq .
04:23:56     04:23:56   -- nvmf/common.sh@545 -- # IFS=,
04:23:56     04:23:56   -- nvmf/common.sh@546 -- # printf '%s\n' '{
04:23:56    "params": {
04:23:56      "name": "Nvme0",
04:23:56      "trtype": "rdma",
04:23:56      "traddr": "192.168.100.8",
04:23:56      "adrfam": "ipv4",
04:23:56      "trsvcid": "4420",
04:23:56      "subnqn": "nqn.2016-06.io.spdk:cnode0",
04:23:56      "hostnqn": "nqn.2016-06.io.spdk:host0",
04:23:56      "hdgst": false,
04:23:56      "ddgst": false
04:23:56    },
04:23:56    "method": "bdev_nvme_attach_controller"
04:23:56  }'
04:23:56  [2024-06-12 04:23:56.567118] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
04:23:56  [2024-06-12 04:23:56.567163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008387 ]
04:23:56  EAL: No free 2048 kB hugepages reported on node 1
04:23:56  [2024-06-12 04:23:56.626970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
04:23:56  [2024-06-12 04:23:56.696776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
04:23:56  Running I/O for 10 seconds...
04:23:57   04:23:57 -- common/autotest_common.sh@848 -- # (( i == 0 ))
04:23:57   04:23:57 -- common/autotest_common.sh@852 -- # return 0
04:23:57   04:23:57 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
04:23:57   04:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable
04:23:57   04:23:57 -- common/autotest_common.sh@10 -- # set +x
04:23:57   04:23:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
04:23:57   04:23:57 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
04:23:57   04:23:57 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
04:23:57   04:23:57 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
04:23:57   04:23:57 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
04:23:57   04:23:57 -- target/host_management.sh@52 -- # local ret=1
04:23:57   04:23:57 -- target/host_management.sh@53 -- # local i
04:23:57   04:23:57 -- target/host_management.sh@54 -- # (( i = 10 ))
04:23:57   04:23:57 -- target/host_management.sh@54 -- # (( i != 0 ))
04:23:57    04:23:57    -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
04:23:57    04:23:57    -- common/autotest_common.sh@551 -- # xtrace_disable
04:23:57    04:23:57    -- common/autotest_common.sh@10 -- # set +x
04:23:57    04:23:57    -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
04:23:57    04:23:57    -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
04:23:57   04:23:57 -- target/host_management.sh@55 -- # read_io_count=3036
04:23:57   04:23:57 -- target/host_management.sh@58 -- # '[' 3036 -ge 100 ']'
04:23:57   04:23:57 -- target/host_management.sh@59 -- # ret=0
04:23:57   04:23:57 -- target/host_management.sh@60 -- # break
04:23:57   04:23:57 -- target/host_management.sh@64 -- # return 0
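The trace above is the waitforio helper (target/host_management.sh@45-64) confirming I/O is flowing before the test perturbs the host. A self-contained re-creation of that loop, with rpc_cmd stubbed so it runs without a live bdevperf (the real helper queries /var/tmp/bdevperf.sock and extracts .bdevs[0].num_read_ops with jq):

```shell
# Stub for: rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
#           | jq -r '.bdevs[0].num_read_ops'
# Returns the 3036 completed reads this run observed.
rpc_cmd() {
  echo 3036
}

waitforio() {
  local sock=$1 bdev=$2 ret=1 i count
  [ -z "$sock" ] && return 1   # @45: an RPC socket is required
  [ -z "$bdev" ] && return 1   # @49: a bdev name is required
  for ((i = 10; i != 0; i--)); do
    count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev")
    # @58: consider I/O flowing once at least 100 reads have completed
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 1
  done
  return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme0n1 && echo "I/O is flowing on Nvme0n1"
```

With read_io_count=3036 the 100-op threshold is met on the first poll, which is why the trace breaks out of the loop immediately at @60.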
04:23:57   04:23:57 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
04:23:57   04:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable
04:23:57   04:23:57 -- common/autotest_common.sh@10 -- # set +x
04:23:57   04:23:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
04:23:57   04:23:57 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
04:23:57   04:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable
04:23:57   04:23:57 -- common/autotest_common.sh@10 -- # set +x
04:23:57   04:23:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
04:23:57   04:23:57 -- target/host_management.sh@87 -- # sleep 1
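The failure window opens with the host remove/re-add sequence traced just above (host_management.sh@84-@87): while bdevperf still has I/O in flight, the host's access to the subsystem is revoked and then restored. A runnable sketch of those two RPCs, with rpc_cmd stubbed since the real calls need a live nvmf_tgt listening on /var/tmp/spdk.sock:

```shell
# Stub so the sketch runs standalone; the real rpc_cmd sends these
# JSON-RPCs to nvmf_tgt over /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc: $*"; }

NQN=nqn.2016-06.io.spdk:cnode0
HOSTNQN=nqn.2016-06.io.spdk:host0

# Revoking the host forces the target to tear down the RDMA qpairs;
# queued bdevperf commands then complete as ABORTED - SQ DELETION,
# which is the flood of completions that follows in this log.
rpc_cmd nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"
# Re-adding the host allows the initiator to reconnect.
rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
sleep 1
```

The unexpected RDMA_CM_EVENT_TIMEWAIT_EXIT reported next suggests the teardown path did not see the RDMA_CM_EVENT_DISCONNECTED it was validating for, which is where this run appears to diverge from a clean pass.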
04:23:58  [2024-06-12 04:23:57.955752] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
04:23:58  [2024-06-12 04:23:57.955787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.955796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.955818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.955833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.955853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.955868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.955883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.955897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.955913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.955928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x3b2935e9
04:23:58  [2024-06-12 04:23:57.955942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.955973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.955988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.955996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x3b2935e9
04:23:58  [2024-06-12 04:23:57.956038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x6891e35b
04:23:58  [2024-06-12 04:23:57.956117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c399000 len:0x10000 key:0x6891e35b
04:23:58  [2024-06-12 04:23:57.956132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c378000 len:0x10000 key:0x6891e35b
04:23:58  [2024-06-12 04:23:57.956149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x6891e35b
04:23:58  [2024-06-12 04:23:57.956165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x6891e35b
04:23:58  [2024-06-12 04:23:57.956180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x3b2935e9
04:23:58  [2024-06-12 04:23:57.956332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0xd045fa31
04:23:58  [2024-06-12 04:23:57.956571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x3b2935e9
04:23:58  [2024-06-12 04:23:57.956588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x4547e630
04:23:58  [2024-06-12 04:23:57.956690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0xb27240af
04:23:58  [2024-06-12 04:23:57.956706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x3b2935e9
04:23:58  [2024-06-12 04:23:57.956756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.956764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x78425832
04:23:58  [2024-06-12 04:23:57.956771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1e916f0 sqhd:15c0 p:0 m:0 dnr:0
04:23:58  [2024-06-12 04:23:57.957143] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller.
04:23:58  [2024-06-12 04:23:57.958029] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
04:23:58  task offset: 27008 on job bdev=Nvme0n1 fails
04:23:58  
04:23:58                                                                                                  Latency(us)
04:23:58  
04:23:58  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
04:23:58  
04:23:58  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
04:23:58  
04:23:58  Job: Nvme0n1 ended in about 1.10 seconds with error
04:23:58     Verification LBA range: start 0x0 length 0x400
04:23:58     Nvme0n1             :       1.10    3013.37     188.34      58.35     0.00   20639.47    3058.35  531278.51
04:23:58  
04:23:58  ===================================================================================================================
04:23:58  
04:23:58  Total                       :               3013.37     188.34      58.35     0.00   20639.47    3058.35  531278.51
04:23:58  [2024-06-12 04:23:57.959723] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
04:23:58  [2024-06-12 04:23:57.959740] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
04:23:58  [2024-06-12 04:23:57.973092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
04:23:58  [2024-06-12 04:23:57.989188] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
04:23:58   04:23:58 -- target/host_management.sh@91 -- # kill -9 3008387
04:23:58  /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3008387) - No such process
04:23:58   04:23:58 -- target/host_management.sh@91 -- # true
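(For context on the trace above: line 91's `kill -9` targets a bdevperf process that has already exited, fails with "No such process", and the trailing `true` swallows the non-zero status so the `set -e` harness keeps running its cleanup. A minimal sketch of that kill-and-continue idiom, with a stand-in child process rather than the real bdevperf PID:)

```shell
#!/usr/bin/env bash
# Sketch of the "kill ... || true" idiom from host_management.sh line 91.
# We spawn and reap a short-lived child so its PID is guaranteed stale,
# standing in for a bdevperf process that already died.
set -e

true &            # child exits immediately
perfpid=$!
wait "$perfpid"   # reap it; the PID is now stale

# Without "|| true", the failed kill would abort the script under set -e.
kill -9 "$perfpid" 2>/dev/null || true

echo "cleanup continues"
```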
04:23:58   04:23:58 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
04:23:58   04:23:58 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
04:23:58    04:23:58    -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
04:23:58    04:23:58    -- nvmf/common.sh@520 -- # config=()
04:23:58    04:23:58    -- nvmf/common.sh@520 -- # local subsystem config
04:23:58    04:23:58    -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
04:23:58    04:23:58    -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
04:23:58  {
04:23:58    "params": {
04:23:58      "name": "Nvme$subsystem",
04:23:58      "trtype": "$TEST_TRANSPORT",
04:23:58      "traddr": "$NVMF_FIRST_TARGET_IP",
04:23:58      "adrfam": "ipv4",
04:23:58      "trsvcid": "$NVMF_PORT",
04:23:58      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
04:23:58      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
04:23:58      "hdgst": ${hdgst:-false},
04:23:58      "ddgst": ${ddgst:-false}
04:23:58    },
04:23:58    "method": "bdev_nvme_attach_controller"
04:23:58  }
04:23:58  EOF
04:23:58  )")
04:23:58     04:23:58   -- nvmf/common.sh@542 -- # cat
04:23:58    04:23:58    -- nvmf/common.sh@544 -- # jq .
04:23:58     04:23:58   -- nvmf/common.sh@545 -- # IFS=,
04:23:58     04:23:58   -- nvmf/common.sh@546 -- # printf '%s\n' '{
04:23:58    "params": {
04:23:58      "name": "Nvme0",
04:23:58      "trtype": "rdma",
04:23:58      "traddr": "192.168.100.8",
04:23:58      "adrfam": "ipv4",
04:23:58      "trsvcid": "4420",
04:23:58      "subnqn": "nqn.2016-06.io.spdk:cnode0",
04:23:58      "hostnqn": "nqn.2016-06.io.spdk:host0",
04:23:58      "hdgst": false,
04:23:58      "ddgst": false
04:23:58    },
04:23:58    "method": "bdev_nvme_attach_controller"
04:23:58  }'
04:23:58  [2024-06-12 04:23:58.495892] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
04:23:58  [2024-06-12 04:23:58.495936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008641 ]
04:23:58  EAL: No free 2048 kB hugepages reported on node 1
04:23:58  [2024-06-12 04:23:58.556138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
04:23:58  [2024-06-12 04:23:58.622513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
04:23:58  Running I/O for 1 seconds...
04:23:59  
04:23:59                                                                                                  Latency(us)
04:23:59  
04:23:59  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
04:23:59  
04:23:59  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
04:23:59     Verification LBA range: start 0x0 length 0x400
04:23:59     Nvme0n1             :       1.01    5786.38     361.65       0.00     0.00   10882.91    1014.25   25839.91
04:23:59  
04:23:59  ===================================================================================================================
04:23:59  
04:23:59  Total                       :               5786.38     361.65       0.00     0.00   10882.91    1014.25   25839.91
04:24:00   04:24:00 -- target/host_management.sh@101 -- # stoptarget
04:24:00   04:24:00 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
04:24:00   04:24:00 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
04:24:00   04:24:00 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt
04:24:00   04:24:00 -- target/host_management.sh@40 -- # nvmftestfini
04:24:00   04:24:00 -- nvmf/common.sh@476 -- # nvmfcleanup
04:24:00   04:24:00 -- nvmf/common.sh@116 -- # sync
04:24:00   04:24:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
04:24:00   04:24:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
04:24:00   04:24:00 -- nvmf/common.sh@119 -- # set +e
04:24:00   04:24:00 -- nvmf/common.sh@120 -- # for i in {1..20}
04:24:00   04:24:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
04:24:00  rmmod nvme_rdma
04:24:00  rmmod nvme_fabrics
04:24:00   04:24:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
04:24:00   04:24:00 -- nvmf/common.sh@123 -- # set -e
04:24:00   04:24:00 -- nvmf/common.sh@124 -- # return 0
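(The `set +e` / `for i in {1..20}` span above retries `modprobe -r` because module removal can fail transiently while qpair references drain. A sketch of that retry shape, using a hypothetical `remove_module` stand-in that fails twice before succeeding rather than the real modprobe call, which needs root and loaded nvme-rdma modules:)

```shell
#!/usr/bin/env bash
# Sketch of the retry loop wrapped in "set +e" ... "set -e" above.
# remove_module is a hypothetical stand-in mimicking a kernel module
# whose refcount is still draining: it fails twice, then succeeds.
attempts=0
remove_module() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

set +e                       # tolerate failures inside the loop
for i in {1..20}; do
    remove_module && break
    sleep 0.1
done
set -e                       # restore errexit, as nvmf/common.sh does

echo "removed after $attempts attempts"
```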
04:24:00   04:24:00 -- nvmf/common.sh@477 -- # '[' -n 3008122 ']'
04:24:00   04:24:00 -- nvmf/common.sh@478 -- # killprocess 3008122
04:24:00   04:24:00 -- common/autotest_common.sh@926 -- # '[' -z 3008122 ']'
04:24:00   04:24:00 -- common/autotest_common.sh@930 -- # kill -0 3008122
04:24:00    04:24:00    -- common/autotest_common.sh@931 -- # uname
04:24:00   04:24:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
04:24:00    04:24:00    -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3008122
04:24:00   04:24:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1
04:24:00   04:24:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
04:24:00   04:24:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3008122'
04:24:00  killing process with pid 3008122
04:24:00   04:24:00 -- common/autotest_common.sh@945 -- # kill 3008122
04:24:00   04:24:00 -- common/autotest_common.sh@950 -- # wait 3008122
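(The `kill` followed by `wait` above is the point where this failure bites: `wait` blocks until nvmf_tgt actually exits, and an unresponsive target never does, so the job sits here until Jenkins times it out at 04:49:50. A sketch of the kill-then-wait teardown pattern, with `sleep 60` as a hypothetical stand-in for the responsive-case nvmf_tgt:)

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: kill, then wait so the shell reaps
# the child and only proceeds once the target process is really gone.
sleep 60 &      # stand-in for nvmf_tgt
pid=$!

kill "$pid"
wait "$pid" 2>/dev/null || true   # wait returns the killed status (143)

if kill -0 "$pid" 2>/dev/null; then
    echo "still running"
else
    echo "process gone"
fi
```

In the failing runs, the real nvmf_tgt ignores the signal, so this `wait` never returns and the surrounding job hangs rather than failing fast.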
04:49:50  Cancelling nested steps due to timeout
04:49:50  Sending interrupt signal to process
04:50:07  Terminated
spdkci commented 4 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly-lts_1957.html

spdkci commented 3 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly-lts_1980.html

spdkci commented 2 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly-lts_2067.html

spdkci commented 2 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly_3955.html

spdkci commented 2 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly-lts_2080.html

spdkci commented 2 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly-lts_2112.html

spdkci commented 2 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly_4001.html

spdkci commented 2 months ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly_4006.html

spdkci commented 1 month ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly-lts_2172.html

spdkci commented 1 week ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly_4193.html

spdkci commented 17 hours ago

Another instance of this failure. Reported by @spdkci / Known Issue Detector. log: https://ci.spdk.io/public_build/autotest-nightly_4217.html