Closed Shadowu410 closed 6 years ago
Try using a smaller tx descriptor_number. You can modify netif_defs->device xxx->tx->descriptor_number to 256 or 128 in /etc/dpvs.conf.
Finally figured out what's wrong. Both tx and rx descriptor numbers are set to 1024. vmxnet3_dev_tx_queue_setup() in dpdk-stable-16.07.2/drivers/net/vmxnet3/vmxnet3_rxtx.c has an argument validation:

```c
if ((tx_conf->txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP) !=
    ETH_TXQ_FLAGS_NOXSUMSCTP) {
	PMD_INIT_LOG(ERR, "SCTP checksum offload not supported");
	return -EINVAL;
}
```

This check returns -EINVAL (-22) to netif_port_start().
Wondering if it can be solved by configuring txq_flags.
The original alibaba/LVS project has no SCTP support, which leads to this failure. However, SCTP was supported in some earlier kernel versions (like 2.6.32, with no FullNAT support). I modified line 3029 of src/netif.c from "txconf.txq_flags = 0;" to "txconf.txq_flags = 512;", which makes dpvs run normally on the vmxnet3 device, though dpvs still has no SCTP support. I am also looking for a better solution to this issue.
For SCTP, it's available in the ipvs module of mainline Linux (linux/net/netfilter/ipvs/ip_vs_proto_sctp.c), but the Linux mainline does not support FullNAT.

However, the Linux SCTP ipvs code is not that large, only about 500 lines. It's possible to refer to dpvs/src/ip_vs_proto_tcp.c and then implement SCTP FullNAT on DPVS.
But for now, we have no schedule for SCTP yet. So if you can contribute SCTP support, we'll be very glad and will provide support on the DPVS side.
Though supporting SCTP is possible, it will take time. Before it's available, is there any solution to make it always pass the SCTP checksum offload support check? I don't know whether it is suitable for dpvs to change that line in src/netif.c.
In my situation, dpvs now runs well, but another problem has come up. When I add an laddr for dpvs, using

./ipvsadm --add-laddr -t $RS:$PORT -z $LADDR -F dpdk1

the DPDK port stops responding to any traffic, including ARP and ICMP, and leaves a log line like

MSGMGR: [sockopt_msg_send:msg#400] errcode set in sockopt msg reply: failed dpdk api

in dpvs.log. Any idea?
The DPDK API fails because your NIC (vmxnet3) does not support Flow Director. For that kind of NIC, FullNAT and SNAT are not supported. http://dpdk.org/doc/guides/nics/overview.html#id1
GG. Thank you sir.
As the title says, netif_port_start() fails to configure the tx queue, with rte_eth_tx_queue_setup() in dpdk returning -22. After inspection, I found port id = 0, queue id = 0, tx descriptor number = 1024, socket id = 0 and a non-null txconf pointer passed to rte_eth_tx_queue_setup().
Here are the dpvs logs:
```
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
CFG_FILE: Opening configuration file '/etc/dpvs.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 8
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu4:dpdk0 rx_queue_id += 3
NETIF: worker cpu4:dpdk0 tx_queue_id += 3
NETIF: worker cpu5:dpdk0 rx_queue_id += 4
NETIF: worker cpu5:dpdk0 tx_queue_id += 4
NETIF: worker cpu6:dpdk0 rx_queue_id += 5
NETIF: worker cpu6:dpdk0 tx_queue_id += 5
NETIF: worker cpu7:dpdk0 rx_queue_id += 6
NETIF: worker cpu7:dpdk0 tx_queue_id += 6
NETIF: worker cpu8:dpdk0 rx_queue_id += 7
NETIF: worker cpu8:dpdk0 tx_queue_id += 7
NETIF: netif_port_start: fail to config dpdk0:tx-queue-0
DPVS: Start dpdk0 failed, skipping ...
Kni: update maddr of dpdk0 Failed!
```