pinggit / dpdk-contrail-book

contrail dpdk day one book

again verify PMD #12

Closed pinggit closed 3 years ago

pinggit commented 4 years ago

this is to revisit #1 after a 2nd read of ch2. p36

DPDK supported NICs: the DPDK library includes Poll Mode Drivers (PMDs) for physical and emulated Ethernet controllers, which are designed to work without asynchronous, interrupt-based signaling mechanisms.

Available DPDK PMDs for physical NICs:

- I40E PMD, for the Intel X710/XL710/X722 10/40 Gbps family of adapters: http://dpdk.org/doc/guides/nics/i40e.html
- IXGBE PMD: http://dpdk.org/doc/guides/nics/ixgbe.html
- Linux bonding PMD: http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html

In a nutshell: on my DPDK server, how do I check for the PMD "user space process"?

ldurandadomia commented 4 years ago

The DPDK tools allow checking which driver is in use. Using the dpdk_nic_bind tool, we can see which driver DPDK has bound:

# docker exec -it contrail-vrouter-agent-dpdk bash
$ /opt/contrail/bin/dpdk_nic_bind.py -s
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=uio_pci_generic unused=ixgbe
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=uio_pci_generic unused=ixgbe
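(As an aside, the status output above is plain text, so a small script can pull out which PCI devices are bound to a DPDK-compatible driver. This is a minimal sketch, not part of the DPDK tooling; the function name is mine, and the format assumed is the one shown in the listing above:)

```python
def parse_dpdk_bound(status_text):
    """Return (pci_address, driver) pairs listed under the
    'DPDK-compatible driver' section of dpdk_nic_bind.py -s output."""
    devices, in_section = [], False
    for line in status_text.splitlines():
        if "DPDK-compatible driver" in line:
            in_section = True                 # section header found
        elif in_section and "drv=" in line:
            fields = line.split()
            pci = fields[0]                   # PCI address is the first field
            drv = next(f.split("=", 1)[1] for f in fields
                       if f.startswith("drv="))
            devices.append((pci, drv))
        elif in_section and line.strip() and set(line.strip()) != {"="}:
            in_section = False                # a new section header ends it
    return devices
```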

lspci also shows the bound driver:

$ lspci -v | more
...
02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Super Micro Computer Inc AOC-STGN-I2S [REV 1.01]
        Physical Slot: 7
        Flags: bus master, fast devsel, latency 0, IRQ 54, NUMA node 0
        Memory at c7800000 (64-bit, prefetchable) [size=512K]
        I/O ports at 6000 [size=32]
        Memory at c7d00000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at c7200000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable- Count=64 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [e0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 0c-c4-7a-ff-ff-b7-2c-f8
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Kernel driver in use: uio_pci_generic
        Kernel modules: ixgbe

Here, uio_pci_generic is the driver used (to expose the card's registers). We can also conclude that the PMD is ixgbe (which is the regular driver used by the card). The PMD itself is provided by DPDK.
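(The same "Kernel driver in use" information can be read straight from sysfs, without lspci: the kernel exposes the bound driver as a `driver` symlink under each PCI device. A minimal sketch; the sysfs root is a parameter only so the lookup can be exercised against any directory tree:)

```python
import os

def kernel_driver_in_use(pci_addr, sysfs_root="/sys/bus/pci/devices"):
    """Return the name of the kernel driver bound to a PCI device,
    or None if no driver is bound. Mirrors lspci's
    'Kernel driver in use:' line by resolving the 'driver' symlink."""
    link = os.path.join(sysfs_root, pci_addr, "driver")
    if not os.path.exists(link):
        return None
    # the symlink points at .../drivers/<name>; its basename is the driver
    return os.path.basename(os.path.realpath(link))
```

For example, `kernel_driver_in_use("0000:02:00.0")` on the host above should return `uio_pci_generic`.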

ldurandadomia commented 4 years ago

Also, the PMD is probably loaded by the DPDK application. In the vRouter we should see it in the DPDK logs (here the PMD is probably the Linux bonding PMD):

2019-10-07 13:34:42,549 VROUTER:        --vdev  "eth_bond_bond0,mode=4,xmit_policy=l23,socket_id=0,mac=0c:c4:7a:b7:2c:f8,lacp_rate=1,slave=0000:02:00.0,slave=0000:02:00.1"
2019-10-07 13:34:42,549 VROUTER:      --lcores  "(0-2)@(0-39),(8-9)@(0-39),10@2,11@3,12@22,13@23"
2019-10-07 13:34:42,555 EAL: Detected 40 lcore(s)
2019-10-07 13:34:42,555 EAL: Detected 2 NUMA nodes
2019-10-07 13:34:42,555 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
2019-10-07 13:34:42,588 EAL: Probing VFIO support...
2019-10-07 13:34:43,127 EAL: PCI device 0000:02:00.0 on NUMA socket 0
2019-10-07 13:34:43,127 EAL:   probe driver: 8086:10fb net_ixgbe
2019-10-07 13:34:43,263 EAL: PCI device 0000:02:00.1 on NUMA socket 0
2019-10-07 13:34:43,263 EAL:   probe driver: 8086:10fb net_ixgbe
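(The EAL "probe driver" lines above are what tell you which PMD was matched to each PCI device. As an aside, a small parser can pull that mapping out of the log; the function is my own sketch, and the sample lines in the test are copied from the log above:)

```python
import re

def pmd_per_device(log_text):
    """Map each PCI address to the PMD name that EAL probed for it.
    EAL logs 'probe driver: <vendor>:<device> <pmd_name>' on the line
    after naming the PCI device, so the two lines are paired up."""
    mapping = {}
    current_pci = None
    for line in log_text.splitlines():
        m = re.search(r"EAL: PCI device (\S+) on NUMA socket", line)
        if m:
            current_pci = m.group(1)
            continue
        m = re.search(r"probe driver: \S+ (\S+)", line)
        if m and current_pci:
            mapping[current_pci] = m.group(1)
            current_pci = None
    return mapping
```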

To be discussed with Kiran.

pinggit commented 4 years ago

The confusing part is not these commands. It is: "who on earth is the user-space PMD" and "who are the kernel enablers"? Most texts describe the PMD as just a user-space process, but all the commands above show kernel-side stuff (the enablers). That is the most confusing part and it needs to be clarified.

In this doc: http://doc.dpdk.org/guides/nics/build_and_test.html it looks like, for any NIC driver, there is a way to compile and generate a corresponding PMD, but I'm really not sure...

ldurandadomia commented 4 years ago

Got it. It is something fundamental. You first have to consider that a NIC has two planes:

In order to be able to use the NIC from user space, you have to move both planes into user space.

In order to write a DPDK application, you need both mechanisms:

For your DPDK application, you have to determine both elements:
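(To make the "poll mode" part of this concrete: instead of sleeping until the NIC raises an interrupt, a PMD-based data path busy-polls the RX queue in a tight loop; in real DPDK this is a loop around rte_eth_rx_burst(). The sketch below only imitates that control flow in Python against a toy queue object; poll_loop, rx_burst, BURST_SIZE and ToyQueue are illustrative stand-ins, not DPDK APIs:)

```python
BURST_SIZE = 32  # a typical DPDK RX burst size

def poll_loop(queue, handle_packet, max_iterations):
    """Busy-poll an RX queue: ask for up to BURST_SIZE packets per
    iteration and process whatever arrived, with no interrupts.
    This is the shape of a DPDK lcore loop around rte_eth_rx_burst()."""
    for _ in range(max_iterations):
        burst = queue.rx_burst(BURST_SIZE)  # returns 0..BURST_SIZE packets
        for pkt in burst:
            handle_packet(pkt)

class ToyQueue:
    """Stand-in for a NIC RX queue, for illustration only."""
    def __init__(self, packets):
        self.packets = list(packets)

    def rx_burst(self, n):
        burst, self.packets = self.packets[:n], self.packets[n:]
        return burst
```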

pinggit commented 3 years ago

To close the issue...

The term "PMD" means at least two things, depending on the context:

  1. It means a concrete thing: a special driver, which is different from the "normal" driver. It is specialized and provided by some vendors (e.g. Intel), and its purpose is to expose NIC features to user space.
    $ /opt/contrail/bin/dpdk_nic_bind.py --status
    Network devices using DPDK-compatible driver
    ============================================
    0000:09:00.1 '82599 10 Gigabit Dual Port Backplane Connection' drv=igb_uio unused=
    0000:87:00.0 '82599 10 Gigabit Dual Port Backplane Connection' drv=igb_uio unused=

so this igb_uio is the PMD.

  2. It means a group of user-space software in general which, leveraging what the special driver (in 1) helped to expose, manipulates the NIC's features and capabilities. In this context, a PMD is NOT a specific process, so you can't expect to see it via the "ps" command.
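(Following up on that last point: since the PMD in sense 2 is library code linked into the DPDK application rather than a process of its own, one place it does show up is in the process's memory maps. A hedged sketch that scans the text of a /proc/&lt;pid&gt;/maps file for DPDK PMD shared objects; note the librte_pmd_* naming is typical of older DPDK releases, newer ones use librte_net_*, and a statically linked build will show nothing at all:)

```python
import re

def pmd_libs_in_maps(maps_text):
    """Return the set of DPDK PMD shared-object names found in the
    text of a /proc/<pid>/maps file. Matches both the older
    librte_pmd_*.so and newer librte_net_*.so naming schemes."""
    pattern = re.compile(r"(librte_(?:pmd|net)_\w+\.so[\w.]*)")
    libs = set()
    for line in maps_text.splitlines():
        m = pattern.search(line)
        if m:
            libs.add(m.group(1))
    return libs
```

For example, on a dynamically linked vRouter one could run it against `open("/proc/<pid>/maps").read()` for the contrail-vrouter-agent-dpdk process.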