xxks-kkk closed this issue 5 years ago
Hi @xxks-kkk,
How much hugepage memory did you allocate in your VM? I see "-m 512" on the QEMU command line so I'm guessing you had pretty limited hugepage memory.
Could you try allocating more hugepage memory? Or override the size of the bdev_io pool by putting the following in bdev.conf in the directory from which you are running the hello_bdev app:
[Bdev]
  BdevIoPoolSize 1024
This would reduce the number of bdev_ios in the global pool from the default of 64K to only 1K.
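Putting that together, a minimal sketch of the workaround (the hello_bdev binary path is an assumption about the SPDK checkout layout; the key point is that bdev.conf sits in the directory the app is launched from):

```shell
# Write the pool-size override into bdev.conf in the current directory,
# which is where hello_bdev is assumed to pick it up per the comment above.
cat > bdev.conf <<'EOF'
[Bdev]
  BdevIoPoolSize 1024
EOF

# Then launch the example from this same directory, e.g. (path is assumed):
# sudo ./examples/bdev/hello_world/hello_bdev
grep BdevIoPoolSize bdev.conf
```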
Note: it might be worth overriding this default in the hello_bdev application itself. Even if the submitter confirms that increasing the hugepage memory works, let's keep this issue open for discussion.
-Jim
@jimharris
Thanks for the quick response. I tried

[Bdev]
  BdevIoPoolSize 1024

with the default QEMU setup, and it doesn't work. However, after I increased the -m value of QEMU to 4096 and ran sudo scripts/setup.sh without HUGEPAGES=, everything works.
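For anyone hitting the same symptom, it can help to check how much hugepage memory is actually reserved and free inside the VM before running the app. This sketch only reads /proc/meminfo and makes no SPDK-specific assumptions:

```shell
# Compute free hugepage memory in MB from /proc/meminfo (Linux only).
free_pages=$(awk '/HugePages_Free/ {print $2}' /proc/meminfo)
page_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
echo "free hugepage memory: $(( free_pages * page_kb / 1024 )) MB"
```

If this prints 0 MB (or far less than the app needs), setup.sh has to reserve more hugepages before the example can start.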
@xxks-kkk We can't reproduce this issue. I set "-m 512" and "-m 256"; both succeed, with some error messages. It is recommended to allocate a bit more memory at initialization time. Maybe you don't have enough memory left. Thanks.
./qemu-system-x86_64 -cpu host -smp 8 -m 512 -object memory-backend-file,id=mem,size=512m,mem-path=/dev/hugepages,share=on -numa node,memdev=mem -drive file=/home/shenfurong/Fedora26.qcow2,if=none,id=disk -device ide-hd,drive=disk,bootindex=0 -net user,hostfwd=tcp::10000-:22 -net nic --enable-kvm -drive format=raw,file=/root/test.img,if=none,id=nvmedrive -device nvme,drive=nvmedrive,serial=1234
[root@localhost spdk]# ./scripts/setup.sh
0000:00:04.0 (8086 5845): nvme -> uio_pci_generic
[root@localhost spdk]# ./examples/nvme/hello_world/hello_world
Starting DPDK 17.05.0 initialization...
[ DPDK EAL parameters: hello_world -c 0x1 --file-prefix=spdk0 --base-virtaddr=0x1000000000 --proc-type=auto ]
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
Initializing NVMe Controllers
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL: probe driver: 8086:5845 spdk_nvme
Attaching to 0000:00:04.0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES (09) sqid:0 cid:63 nsid:0 cdw10:0000000b cdw11:0000001f
nvme_qpair.c: 284:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:0 cid:63 cdw0:0 sqhd:0005 p:1 m:0 dnr:1
nvme_ctrlr.c: 952:nvme_ctrlr_configure_aer: *ERROR*: nvme_ctrlr_cmd_set_async_event_config failed!
nvme_qpair.c: 112:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) sqid:0 cid:63 nsid:ffffffff cdw10:007f00c0 cdw11:00000000
nvme_qpair.c: 284:nvme_qpair_print_completion: *NOTICE*: INVALID OPCODE (00/01) sqid:0 cid:63 cdw0:0 sqhd:0006 p:1 m:0 dnr:1
nvme_ctrlr.c: 352:nvme_ctrlr_set_intel_support_log_pages: *ERROR*: nvme_ctrlr_cmd_get_log_page failed!
Attached to 0000:00:04.0
Using controller QEMU NVMe Ctrl (1234 ) with 1 namespaces.
Namespace ID: 1 size: 1GB
Initialization complete.
Hello world!
Closing this issue - VM needed more memory. Submitter confirmed that adding more memory to the VM fixed the issue.
I did look at the possibility of reducing the size of the bdev_io pool - but that does little to reduce the memory consumption.
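As a rough sanity check on that point, the pool's footprint is small relative to the hundreds of MB of hugepage memory involved. The per-entry size below is an assumption for illustration only; the actual sizeof(struct spdk_bdev_io) varies by SPDK version:

```shell
# Back-of-envelope bdev_io pool footprint.
PER_IO=256   # bytes per bdev_io: an assumption, not the real struct size
echo "64K pool: $(( 65536 * PER_IO / 1024 / 1024 )) MB"
echo " 1K pool: $(( 1024 * PER_IO / 1024 / 1024 )) MB"
```

Even at a few hundred bytes per entry, shrinking the pool from 64K to 1K recovers only on the order of tens of MB, which is why it does little against a shortfall of hundreds of MB of VM memory.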
Expected Behavior
I try to run SPDK in QEMU with a simulated NVMe device. I can successfully build and rebind the driver. However, when I run hello_bdev using sudo ./hello_bdev, I hit the following error.
Possible Solution
Steps to Reproduce
Context (Environment including OS version, SPDK version, etc.)