k8snetworkplumbingwg / sriov-cni

DPDK & SR-IOV CNI plugin
Apache License 2.0

feature: Allow configuration of num_queues for vf #286

Closed: cyclinder closed this issue 9 months ago

cyclinder commented 10 months ago

What would you like to be added?

Allow configuration of num_queues for a VF. If the number of queues is too small, it affects network performance.

What is the use case for this feature / enhancement?

zeeke commented 10 months ago

Hi @cyclinder, can you please elaborate more on this feature? How would you change the number of queues for a VF? If the same command can be applied to any Ethernet NIC that supports multiple queues, this feature might be better implemented in a metaplugin, like tuning CNI. WDYT?

cyclinder commented 10 months ago

If the same command can be applied to any ethernet NIC that supports multiqueues, this feature might be implemented in a metaplugin, like tuning CNI.

Hi @zeeke, in reality this property can only be set when the device is created, and the netlink and ethtool Golang libraries are unable to update it afterwards (correct me if I'm wrong). I've made numerous attempts, but none have worked. We can easily set this property when creating the device; please see https://github.com/containernetworking/plugins/pull/986.
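
For reference, the approach in that PR sets the queue counts at link-creation time through the netlink library. A minimal sketch of that idea, assuming the github.com/vishvananda/netlink package (the parent interface eth0, the link name mv0, and the queue counts are illustrative):

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Illustrative names and values: parent "eth0", new link "mv0", 4+4 queues.
	parent, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatalf("lookup parent: %v", err)
	}

	mv := &netlink.Macvlan{
		LinkAttrs: netlink.LinkAttrs{
			Name:        "mv0",
			ParentIndex: parent.Attrs().Index,
			// Queue counts can only be supplied at creation time.
			NumTxQueues: 4,
			NumRxQueues: 4,
		},
		Mode: netlink.MACVLAN_MODE_BRIDGE,
	}

	// LinkAdd creates the device with the requested queue counts;
	// there is no corresponding setter for an existing netdev such as a VF.
	if err := netlink.LinkAdd(mv); err != nil {
		log.Fatalf("create macvlan: %v", err)
	}
}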

adrianchiris commented 10 months ago

I think for a VF these queues are determined by the driver. You cannot set them with the ip link add command because the VF netdev already exists (in contrast to macvlan/ipvlan, where the CNI creates a new virtual netdevice).
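
To illustrate the contrast (interface names are just examples): ip link add accepts queue counts only as part of creating a new link, which is why this works for macvlan but has no equivalent for a pre-existing VF netdev:

# Works for a newly created macvlan: queue counts are part of link creation.
ip link add link eth0 name mv0 numtxqueues 4 numrxqueues 4 type macvlan mode bridge

# A VF netdev already exists, so there is no "add" step to hook into,
# and ip link set has no numtxqueues/numrxqueues option.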

cyclinder commented 10 months ago

i think for VF these queues are determined by the driver.

Well, I think you're right. Without any configuration, I checked the queue counts for all VFs with the command below, and they are all the same.

root@10-20-1-220:~# ip --details link show enp4s1f10v0
90: enp4s1f10v0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 6a:ad:29:af:aa:f9 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 88 numrxqueues 11 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:04:01.2
root@10-20-1-220:~# ip --details link show enp4s1f11v1
91: enp4s1f11v1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 82:f4:db:e9:ba:93 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 88 numrxqueues 11 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:04:01.3
root@10-20-1-220:~# ip --details link show enp4s1f12v2
92: enp4s1f12v2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether e6:98:83:d2:09:47 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 68 maxmtu 9978 addrgenmode eui64 numtxqueues 88 numrxqueues 11 gso_max_size 65536 gso_max_segs 65535 tso_max_size 524280 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:04:01.4

"Can you tell me more about the driver settings? thanks :)

adrianchiris commented 10 months ago

Can you tell me more about the driver settings? Thanks :)

Unfortunately I don't have more information. You can try referring to the documentation of the specific network driver.
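
For anyone landing here, a common starting point is ethtool's channel (queue) interface; whether a VF accepts changes depends entirely on the driver (the interface name below is taken from the output above):

# Query the current and maximum channel (queue) counts.
ethtool -l enp4s1f10v0

# Attempt to change the combined channel count; many VF drivers
# reject this or cap it at what the PF driver has allocated.
ethtool -L enp4s1f10v0 combined 4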

SchSeba commented 9 months ago

As mentioned above, it's not possible to change the number of queues for SR-IOV virtual functions; this depends on each network card's driver. Closing this issue; feel free to reopen if needed.