Test output (following the instructions):
(base) fengggli@ribbit5(:):~/WorkSpace/dpdk905/dpdk-19.05$sudo ./build/app/testpmd -- --mp-alloc xmem
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.2 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.3 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=523456, size=2176, socket=0
testpmd: Allocated 2172MB of external memory
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=523456, size=2176, socket=1
testpmd: Allocated 2172MB of external memory
testpmd: preferred mempool ops selected: ring_mp_mc
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: xmem
io packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=0
Press enter to exit
Telling cores to stop...
Waiting for lcores to finish...
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
Bye...
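For reference, a minimal sketch (not mcas code; the heap name, sizes, and the anonymous hugepage allocation are my assumptions) in the spirit of what --mp-alloc xmem does: hand an externally allocated region to a named DPDK malloc heap and create the mbuf pool on it:

```c
/* Sketch: put externally allocated memory on a named DPDK malloc heap and
 * create an mbuf pool there (roughly the idea behind --mp-alloc xmem).
 * Heap name and sizes are illustrative; error handling abbreviated. */
#include <sys/mman.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

#define EXT_HEAP_NAME "ext_heap_socket_0"  /* illustrative heap name */
#define EXT_MEM_SZ    (64UL << 20)         /* 64 MB external region */
#define EXT_PAGE_SZ   (2UL << 20)          /* assume 2 MB hugepages */

static struct rte_mempool *create_ext_pool(void)
{
	void *va = mmap(NULL, EXT_MEM_SZ, PROT_READ | PROT_WRITE,
			MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (va == MAP_FAILED)
		return NULL;

	/* Create the heap and add the region to it. Passing NULL for the
	 * IOVA table leaves the pages at RTE_BAD_IOVA, which is tolerable
	 * here because no ports were probed (no DMA). */
	if (rte_malloc_heap_create(EXT_HEAP_NAME) != 0 ||
	    rte_malloc_heap_memory_add(EXT_HEAP_NAME, va, EXT_MEM_SZ,
				       NULL, 0, EXT_PAGE_SZ) != 0)
		return NULL;

	/* Allocations on the heap's pseudo socket id now come from the
	 * external region. */
	int socket_id = rte_malloc_heap_get_socket(EXT_HEAP_NAME);
	return rte_pktmbuf_pool_create("mbuf_pool_ext", 8192, 256, 0,
				       RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
}
```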
I don't necessarily need DPDK at this stage; I could just run mcas/testcl.cc in the virtual machine instead (https://spdk.io/doc/env_8h.html#a0874731c44ac31e4b14d91c6844a87d1).
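The anchor in that env.h link doesn't say which function it points at, but if the goal is to make an externally allocated buffer usable by SPDK, the env call I know of for that is spdk_mem_register. A hedged sketch (assuming an already initialized SPDK env and a 2 MB aligned buffer):

```c
/* Sketch (assumption): register an externally allocated, hugepage-backed
 * buffer with SPDK's env layer. spdk_mem_register() wants the address and
 * length 2 MB aligned, and the SPDK env must already be initialized. */
#include <sys/mman.h>
#include <spdk/env.h>

static void *alloc_and_register(size_t len /* multiple of 2 MB */)
{
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED)
		return NULL;

	if (spdk_mem_register(buf, len) != 0) {  /* track it for vtophys/DMA */
		munmap(buf, len);
		return NULL;
	}
	return buf;
}
```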
Could I use get_user_pages? (https://lwn.net/Articles/753027/, http://nuncaalaprimera.com/2014/using-hugepage-backed-buffers-in-linux-kernel-driver)
What I could do is add an ioctl in mcas (I couldn't use mmap here, since directly mmap-ing a non-hugepage file (a file that uses the exact same file_operations) results in an unsuccessful mmap: https://elixir.bootlin.com/linux/v4.15.18/source/mm/mmap.c#L1507). See the kernel-side sketch below.
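A rough sketch of that ioctl idea, along the lines of the get_user_pages articles above; mcas_pin_ioctl, struct pin_req, and the overall flow are hypothetical, and the get_user_pages signature is the ~4.15 one matching the mmap.c link:

```c
/* Hypothetical ioctl handler that pins a user hugepage-backed buffer with
 * get_user_pages() instead of implementing mmap on the device itself.
 * Written against the ~4.15 kernel API (mmap_sem, 5-argument GUP). */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/uaccess.h>

struct pin_req {                 /* hypothetical user<->kernel layout */
	unsigned long uaddr;     /* user virtual address, page aligned */
	unsigned long nr_pages;  /* number of pages to pin */
};

static long mcas_pin_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
	struct pin_req req;
	struct page **pages;
	long pinned;

	if (copy_from_user(&req, (void __user *)arg, sizeof(req)))
		return -EFAULT;

	pages = kvmalloc_array(req.nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	down_read(&current->mm->mmap_sem);
	pinned = get_user_pages(req.uaddr, req.nr_pages,
				FOLL_WRITE, pages, NULL);
	up_read(&current->mm->mmap_sem);

	if (pinned < 0) {
		kvfree(pages);
		return pinned;
	}
	/* ... translate the pages to physical/DMA addresses for the device,
	 * and put_page() each entry when the I/O is done ... */
	return pinned;
}
```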
Figure out how the SPDK block abstraction interacts with libpmemblk (where is the iomem?). Steps:
pmap output for test-hugepage-shm:
(py36) lifen@sievert(:):~/Workspace/vagrantvm/vagrant-ubuntu18-spdk1810/linux-hwe-4.15.0$pmap 25474
25474: ./src/components/store/nvmestore/testing/./test-hugepage-shm
0000000000400000 4K r-x-- test-hugepage-shm
0000000000600000 4K r---- test-hugepage-shm
0000000000601000 4K rw--- test-hugepage-shm
00000000017fc000 132K rw--- [ anon ]
00007f1cbac00000 262144K rw-s- SYSV00000002 (deleted)
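For context, the 262144K SYSV00000002 (deleted) line above is a System V shared-memory segment; something along these lines creates one (a sketch: the key and size are illustrative, and SHM_HUGETLB is an assumption since the test is hugepage based):

```c
/* Sketch: create and attach a SYSV shared-memory segment like the 256 MB
 * "SYSV00000002 (deleted)" mapping in the pmap output above.
 * Key and size are illustrative; SHM_HUGETLB is assumed. */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	const size_t size = 256UL << 20;                 /* 256 MB */
	int id = shmget(0x2, size, IPC_CREAT | SHM_HUGETLB | 0600);
	if (id < 0) { perror("shmget"); return 1; }

	void *addr = shmat(id, NULL, 0);                 /* map into this process */
	if (addr == (void *)-1) { perror("shmat"); return 1; }

	/* "(deleted)" in pmap means the segment was marked for removal while
	 * still attached, i.e. shmctl(IPC_RMID) has already been called. */
	shmctl(id, IPC_RMID, NULL);
	return 0;
}
```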
pmap output for test-mcas-nvmestore (these mappings are backed by files in /dev/hugepages):
...
00007f0b8ce00000 2048K rw-s- nvme_comanchemap_32768
00007f0b8d000000 2048K rw-s- nvme_comanchemap_32769
00007f0b8d200000 2048K rw-s- nvme_comanchemap_32770
00007f0b8d400000 2048K rw-s- nvme_comanchemap_32771
00007f0b8d600000 2048K rw-s- nvme_comanchemap_32772
00007f0b8d800000 2048K rw-s- nvme_comanchemap_32773
...
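Those nvme_comanchemap_* entries are files under /dev/hugepages mapped MAP_SHARED, which is the usual DPDK-style way of getting named hugepage memory; a sketch of the pattern (the path and file name here are made up):

```c
/* Sketch: create a named, 2 MB hugepage-backed shared mapping the way the
 * nvme_comanchemap_* files above appear under /dev/hugepages.
 * Path and file name are illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t sz = 2UL << 20;                  /* one 2 MB hugepage */
	int fd = open("/dev/hugepages/examplemap_0",  /* hugetlbfs mount */
		      O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }

	/* mmap on hugetlbfs reserves the page; no ftruncate needed. */
	void *va = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (va == MAP_FAILED) { perror("mmap"); return 1; }

	/* The mapping now shows up in pmap as "2048K rw-s- examplemap_0". */
	munmap(va, sz);
	close(fd);
	return 0;
}
```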
* The name is passed in src/lib/core/src/dpdk.cpp and then handed to DPDK.
This issue tracks how to register external memory with rte.
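For the registration step itself, DPDK also exposes a lower-level path than the malloc-heap route sketched earlier: rte_extmem_register plus rte_dev_dma_map. A hedged sketch, where the buffer, size, and the IOVA-as-VA assumption are mine:

```c
/* Sketch: register an externally allocated region with EAL without putting
 * it on a malloc heap, then DMA-map it for the device behind a port.
 * ext_buf must be page-size aligned and ext_len a multiple of the page size;
 * IOVA == VA is assumed when DMA-mapping. */
#include <rte_memory.h>
#include <rte_dev.h>
#include <rte_ethdev.h>

static int register_ext_mem(void *ext_buf, size_t ext_len, uint16_t port_id)
{
	/* Tell EAL about the region. A NULL IOVA table leaves the pages at
	 * RTE_BAD_IOVA; the real IOVA is supplied at DMA-map time below. */
	if (rte_extmem_register(ext_buf, ext_len, NULL, 0, 2UL << 20) != 0)
		return -1;

	struct rte_eth_dev_info info;
	rte_eth_dev_info_get(port_id, &info);
	return rte_dev_dma_map(info.device, ext_buf,
			       (uint64_t)(uintptr_t)ext_buf, ext_len);
}
```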