pinggit / dpdk-contrail-book

contrail dpdk day one book

verify pmd #1

Closed pinggit closed 3 years ago

pinggit commented 4 years ago

about: https://github.com/pinggit/dpdk-contrail-book/blob/master/ContrailPerformanceGuidev3.2.docx.reorga.adoc#linux-drivers-for-pmd

  1. How to verify which PMD is in use in a setup?

    • less /proc/modules?
    • modprobe?
  2. PMD (the user-space driver) vs. the "kernel drivers" supporting it: who is who?

According to:

Different PMDs may require different kernel drivers in order to work properly. Depending on the PMD being used, a corresponding kernel driver should be loaded and bound to the network ports. Before loading, make sure that each NIC has been flashed with the latest version of NVM/firmware.

pinggit commented 4 years ago

According to my understanding, these are the "kernel drivers" needed for the PMD to work properly:

[heat-admin@jnprctdpdk01 ~]$ less /proc/modules | grep -i uio
uio_pci_generic 12588 2 - Live 0xffffffffc0746000
uio 19338 5 uio_pci_generic, Live 0xffffffffc073c000

In this case I see two kernel modules (drivers).

How to display the "PMD" in use? Not via this command, I believe:

[heat-admin@jnprctdpdk01 ~]$ ps aux | grep -i pmd
rabbitmq   16917  0.0  0.0  48904   488 ?        S    Apr12   1:55 /usr/lib64/erlang/erts-7.3.1.6/bin/epmd -daemon

ANSWER (LD): Here you are displaying the PMD drivers that are available. In my labs I'm not using vfio, but it would be possible to load the vfio drivers and use vfio as the PMD for the Contrail physical NIC. See the previous comment.

pinggit commented 4 years ago

This table is confusing too; it does not look correct.

[cols=",,,,",options="header",]
|====
|                 |*RHEL DPDK*               |*Ubuntu DPDK*|*RHEL SRIOV (VF)**|*Ubuntu SRIOV (VF)**
|*igb_uio*        |No (no dkms support)      |Yes (dkms)   |No                |Yes
|*uio_pci_generic*|No (not supported by RHEL)|Yes          |No                |No
|*vfio_pci*       |Yes                       |Yes          |Yes               |Yes
|====

ANSWER (LD): I agree, this is unclear. In fact, if you are mixing SR-IOV with DPDK (I guess if you are using a VF as the vrouter physical NIC) you have some limitations; in any case, vfio-pci should be used with SR-IOV. All these PMDs have their own history. The latest one is VFIO, and it should be the preferred choice.

pinggit commented 4 years ago

Found it; I forgot this script. What it shows is the "kernel driver" supporting the corresponding poll mode driver, which is the same as what less /proc/modules gives. So RHEL does support uio_pci_generic. But, again, the question is: how to display the PMD user-space process?

(vrouter-agent-dpdk)[root@jnprctdpdk01 /]$ /opt/contrail/bin/dpdk_nic_bind.py -s

Network devices using DPDK-compatible driver
============================================
0000:02:01.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000
0000:02:02.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000

Network devices using kernel driver
===================================
0000:03:00.0 'Virtio network device' if= drv=virtio-pci unused=virtio_pci,uio_pci_generic

Other network devices
=====================
<none>

ANSWER (LD): Correct. See my first comment. This is not the only way. But this is probably the best.

pinggit commented 4 years ago

Moving the answer from Laurent here as a comment:

ANSWER (LD):

So first, you have to know which NICs your DPDK application is using.

For the vrouter, we can get this from the log files:

# grep PCI /var/log/containers/contrail/contrail-vrouter-dpdk.log
2020-05-27 05:39:51,471 EAL: PCI device 0000:02:01.0 on NUMA socket -1
2020-05-27 05:39:51,788 EAL: PCI device 0000:02:02.0 on NUMA socket -1
2020-05-27 05:39:56,152 VROUTER:     bond member eth device 0 PCI 0000:02:01.0 MAC 52:54:00:bd:ec:13
2020-05-27 05:39:56,156 VROUTER:     bond member eth device 1 PCI 0000:02:02.0 MAC 52:54:00:d4:cd:fb

Or, when using Red Hat operating systems with Contrail:

# grep BIND /etc/sysconfig/network-scripts/ifcfg-vhost0
BIND_INT=0000:02:01.0,0000:02:02.0
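The BIND_INT value is just a comma-separated list of PCI addresses; a tiny sketch of parsing it, mostly to make the expected format explicit (file path and key are taken from the output above):

```python
def parse_bind_int(line):
    """Parse a 'BIND_INT=<addr>,<addr>,...' line from ifcfg-vhost0
    into a list of PCI addresses."""
    key, _, value = line.strip().partition("=")
    if key != "BIND_INT":
        raise ValueError(f"not a BIND_INT line: {line!r}")
    return [addr for addr in value.split(",") if addr]
```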

Next, you have to check which PMD driver is used by DPDK:

# docker exec -it contrail-vrouter-agent-dpdk bash
(vrouter-agent-dpdk)[root@jnprctdpdk01 /]$ /opt/contrail/bin/dpdk_nic_bind.py -s

Network devices using DPDK-compatible driver
============================================
0000:02:01.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000
0000:02:02.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000

Or, when using Red Hat operating systems with Contrail, the information is in the vhost0 config file:

# cat /etc/sysconfig/network-scripts/ifcfg-vhost0
DRIVER=uio_pci_generic

You can also check the PCI devices with one of these commands:

# lshw -class network | more
WARNING: you should run this program as super-user.
  *-network:0
       description: Ethernet controller
       product: 82540EM Gigabit Ethernet Controller
       vendor: Intel Corporation
       physical id: 1
       bus info: pci@0000:02:01.0
       version: 03
       width: 32 bits
       clock: 33MHz
       capabilities: bus_master rom
       configuration: driver=uio_pci_generic latency=0
       resources: irq:23 memory:fda80000-fda9ffff ioport:c000(size=64) memory:fda00000-fda3ffff
  *-network:1
       description: Ethernet controller
       product: 82540EM Gigabit Ethernet Controller
       vendor: Intel Corporation
       physical id: 2
       bus info: pci@0000:02:02.0
       version: 03
       width: 32 bits
       clock: 33MHz
       capabilities: bus_master rom
       configuration: driver=uio_pci_generic latency=0
       resources: irq:20 memory:fdaa0000-fdabffff ioport:c040(size=64) memory:fda40000-fda7ffff

# lspci -s 02:02.0 -k
02:02.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
        Subsystem: Red Hat, Inc. QEMU Virtual Machine
        Kernel driver in use: uio_pci_generic
        Kernel modules: e1000

PMD drivers also rely on a kernel module that has to be loaded for them to work:

$ lsmod | grep uio
uio_pci_generic        12588  2
uio                    19338  5 uio_pci_generic

A last check shows that our two Ethernet devices are indeed bound to this driver:

# ls -l /sys/bus/pci/drivers/uio_pci_generic
total 0
lrwxrwxrwx. 1 root root    0 May 27 22:33 0000:02:01.0 -> ../../../../devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0
lrwxrwxrwx. 1 root root    0 May 27 22:34 0000:02:02.0 -> ../../../../devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:02.0
--w-------. 1 root root 4096 May 28 12:19 bind
lrwxrwxrwx. 1 root root    0 May 28 11:47 module -> ../../../../module/uio_pci_generic
--w-------. 1 root root 4096 May 27 22:34 new_id
--w-------. 1 root root 4096 May 28 12:19 remove_id
--w-------. 1 root root 4096 May 28 12:19 uevent
--w-------. 1 root root 4096 May 27 22:33 unbind

pinggit commented 4 years ago

From: Ping Song
Sent: Thursday, May 28, 2020 12:58 PM
To: Laurent Antoine Durand ldurand@juniper.net; Przemyslaw Grygiel pgrygiel@juniper.net; Kiran KN kirankn@juniper.net; Damian Szeluga dszeluga@juniper.net
Subject: RE: vrouter/DPDK day one book: about "pmd"

But I do see the PMD illustrated as a user-space process.

“When DPDK is used, network interfaces are no longer managed in kernel space. The legacy NIC driver which is usually used to manage the NIC has to be replaced by a new driver which is able to run in user space. This new driver, called the Poll Mode Driver (PMD), will be used to manage the network interface in user space with the DPDK library.”

And then:

“Different PMDs may require different kernel drivers in order to work properly. Depending on the PMD being used, a corresponding kernel driver should be loaded and bound to the network ports. Before loading, make sure that each NIC has been flashed with the latest version of NVM/firmware.”

Sounds like two ends:

• Driver/module in kernel (back end?)
• A process in user space (front end?)

So far, everything we've checked via the CLI finds the kernel end. What I'm interested in is how to locate the process running in user mode, the one doing the real work (namely the "poll mode"). And I believe this should NOT be the same thing as "the polling threads"?

Regards,
ping

From: Laurent Antoine Durand ldurand@juniper.net
Sent: Thursday, May 28, 2020 12:33 PM
To: Ping Song pings@juniper.net; Przemyslaw Grygiel pgrygiel@juniper.net; Kiran KN kirankn@juniper.net; Damian Szeluga dszeluga@juniper.net
Subject: RE: vrouter/DPDK day one book: about "pmd"

Q: where is the PMD “user space process”?

Not fully clear for me. The PMD replaces the “legacy” card driver. One part of the PMD is running in kernel space. We also have to consider two topics:

I’ve put this info into the diagram I made.

Laurent

From: Ping Song pings@juniper.net
Sent: 28 May 2020 17:47
To: Przemyslaw Grygiel pgrygiel@juniper.net; Laurent Antoine Durand ldurand@juniper.net; Kiran KN kirankn@juniper.net; Damian Szeluga dszeluga@juniper.net
Subject: RE: vrouter/DPDK day one book: about "pmd"

Regarding PMD (issue #2): https://github.com/pinggit/dpdk-contrail-book/issues/1#issue-625774372

Thanks again, very extensive.

We know how to identify the kernel driver, but there is one question not answered yet: where is the PMD “user space process”?

thanks.

pinggit commented 4 years ago

From Laurent:

Detailed explanations: https://codilime.com/how-can-dpdk-access-devices-from-user-space/

PMD on the left, a regular kernel driver on the right. The red rectangle marks the part that is moved into user space. (image)
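On the recurring question "where is the PMD user-space process": a PMD is not a separate daemon. It is a library linked into the DPDK application itself (here, the contrail-vrouter-dpdk process), so its polling work appears as threads of that process rather than as an extra entry in ps. A hedged sketch of how one might list those threads from /proc; the thread names mentioned in the comment are typical DPDK conventions, not something confirmed by this thread.

```python
import os

def process_threads(pid):
    """Return {tid: thread_name} by reading /proc/<pid>/task/*/comm.

    For a DPDK application, the PMD polling loops show up here as ordinary
    threads (in many DPDK versions named like 'lcore-slave-N' or
    'eal-intr-thread'), inside the application's own process.
    """
    task_dir = f"/proc/{pid}/task"
    threads = {}
    for tid in os.listdir(task_dir):
        with open(os.path.join(task_dir, tid, "comm")) as f:
            threads[int(tid)] = f.read().strip()
    return threads
```

For example, `process_threads(os.getpid())` lists the current process's own threads; pointed at the vrouter DPDK PID, it would show the forwarding lcore threads alongside the main thread.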

pinggit commented 3 years ago

see #12