mos-stack / mOS-networking-stack

A Specialized Network Programming Library for Stateful Middleboxes:
http://mos.kaist.edu

Problems with more than 23 flows. #9

Open eratormortimer opened 7 years ago

eratormortimer commented 7 years ago

I tested mOS with the MoonGen packet generator, using a TCP pcap to measure the performance of the sample NAT and the sample simple_firewall. The server runs Debian Jessie and has the following configuration:

- CPU: Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
- Number of CPUs: 1
- Memory: 16 GB
- Mainboard: X9SCL/X9SCM
- Mgmt MAC: 00:25:90:75:4c:16
- IPMI MAC: 00:25:90:75:49:23
- NICs: 2x Intel X540 (1x X540-T2)

I ran mOS in inline mode. When I send a pcap with more than 23 flows, mOS stops processing the incoming traffic: the process keeps running, but it no longer shows incoming traffic and no longer forwards any traffic. The same happens with UDP traffic. I have no idea what could cause this.
Can you help me out? I'm a relative newbie at this, so sorry if something important is missing.

ajamshed commented 7 years ago

Hi,

Can you please share the mos.conf file for your experiments? What is the maximum number of concurrent connections (at any given time) being set up in your experiment?

Your description was not entirely clear: are you replaying packets (from pcap files) in your experiments? In that case, you may want to run your experiment in passive mode.
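(For readers following along: the inline/passive distinction lives in mos.conf. A minimal sketch of the passive variant, assuming the `forward` flag is what toggles inline forwarding between the two NICs; the mOS documentation is the authoritative reference here:)

```
mos {
	# passive monitoring: mOS only observes traffic and does not
	# forward packets between the two NICs (assumed semantics of
	# the forward flag; verify against the mOS docs)
	forward = 0
	...
}
```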

eratormortimer commented 7 years ago

Hey,
thanks for the fast answer and sorry it took me so long.

I only changed my mos.conf according to your documentation on the inline mode:

```
mos {
	forward = 1

	#######################
	##### I/O OPTIONS #####
	#######################
	# number of memory channels per socket [mandatory for DPDK]
	nb_mem_channels = 4

	# devices used for MOS applications [mandatory]
	netdev {
		dpdk0 0x00FF
		dpdk1 0x00FF
	}

	#######################
	### LOGGING OPTIONS ###
	#######################
	# NICs to print network statistics per second
	# if enabled, mTCP will print xx Gbps and xx pps for RX and TX
	stat_print = dpdk0 dpdk1

	# directory containing MOS system log files
	mos_log = logs/

	########################
	## NETWORK PARAMETERS ##
	########################
	# This is to configure the static ARP table
	# (Destination IP address) (Destination MAC address)
	arp_table {
	}

	# This is to configure the static routing table
	# (Destination address)/(Prefix) (Device name)
	route_table {
	}

	# This is to configure the static bump-in-the-wire NIC forwarding table
	# DEVNIC_A DEVNIC_B ## (e.g. dpdk0 dpdk1)
	nic_forward_table {
		dpdk0 dpdk1
	}

	########################
	### ADVANCED OPTIONS ###
	########################
	# if required, uncomment the following options and change them

	# maximum concurrency per core [optional / default: 100000]
	# (MOS-specific parameter for preallocation)
	# max_concurrency = 100000

	# disable the ring buffer [optional / default: 0]
	# use disabled buffer management only for standalone monitors;
	# end-host applications always need receive buffers for TCP!
	# no_ring_buffers = 1

	# receive buffer size of sockets [optional / default: 8192]
	# rmem_size = 8192

	# send buffer size of sockets [optional / default: 8192]
	# wmem_size = 8192

	# tcp timewait seconds [optional / default: 0]
	tcp_tw_interval = 30

	# tcp timeout seconds [optional / default: 30]
	# (set tcp_timeout = -1 to disable timeout checking)
	# tcp_timeout = 30
}
```

I used the test pcaps from tcpreplay (http://tcpreplay.appneta.com/wiki/captures.html) and replayed them in a loop. None of them worked. If I understood your question correctly, the maximum number of concurrent connections should not be higher than the number of flows.
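(As a sanity check on the 23-flow threshold, it can help to count how many distinct TCP/UDP flows a test pcap actually contains before replaying it. Below is an illustrative stdlib-only Python sketch; `count_flows` is a hypothetical helper, not part of mOS or tcpreplay, and it only handles classic pcap files carrying plain IPv4 over Ethernet.)

```python
import struct

def count_flows(path):
    """Count distinct bidirectional TCP/UDP 5-tuples in a classic pcap file."""
    flows = set()
    with open(path, "rb") as f:
        global_hdr = f.read(24)  # pcap global header
        if len(global_hdr) < 24:
            return 0
        magic = struct.unpack("<I", global_hdr[:4])[0]
        if magic == 0xA1B2C3D4:
            endian = "<"
        elif magic == 0xD4C3B2A1:
            endian = ">"
        else:
            raise ValueError("not a classic pcap file")
        while True:
            pkt_hdr = f.read(16)  # per-record header
            if len(pkt_hdr) < 16:
                break
            _ts_sec, _ts_usec, incl_len, _orig_len = struct.unpack(
                endian + "IIII", pkt_hdr)
            data = f.read(incl_len)
            if len(data) < 14 or data[12:14] != b"\x08\x00":
                continue                     # not IPv4 over Ethernet
            ip = data[14:]
            if len(ip) < 20:
                continue
            ihl = (ip[0] & 0x0F) * 4         # IPv4 header length
            proto = ip[9]
            if proto not in (6, 17):         # TCP or UDP only
                continue
            src, dst = ip[12:16], ip[16:20]
            l4 = ip[ihl:]
            if len(l4) < 4:
                continue
            sport, dport = struct.unpack("!HH", l4[:4])
            # normalize direction so both halves map to one flow
            key = (proto,) + tuple(sorted([(src, sport), (dst, dport)]))
            flows.add(key)
    return len(flows)
```

Calling `count_flows("test.pcap")` on each capture before replaying it would show exactly where the >23-flow regime begins.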

This is my server structure:
*(image: diagram of the two-server setup)*

One server is the DUT, and the other sends out the packets and receives them on its other port to measure latency and other metrics. This is similar to the setup in your inline-mode example, which is why I used that one.

I replayed pcaps and also generated UDP packets and sent them through. In both cases, the sample applications stop working once I go over 23 flows.
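(The UDP side of this test can be approximated with a few lines of stdlib Python. This is a hedged sketch, not the MoonGen script actually used in the experiment; the helper name `send_udp_flows` is invented. Each socket binds its own ephemeral source port, so the middlebox sees one new UDP 5-tuple per socket.)

```python
import socket

def send_udp_flows(dst_ip, dst_port, n_flows, payload=b"x"):
    """Send one datagram per flow toward (dst_ip, dst_port).
    Sockets stay open until the end so their ephemeral source
    ports remain distinct, giving n_flows distinct 5-tuples."""
    socks = []
    try:
        for _ in range(n_flows):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", 0))  # OS picks an unused ephemeral port
            s.sendto(payload, (dst_ip, dst_port))
            socks.append(s)
    finally:
        for s in socks:
            s.close()
    return n_flows
```

Something like `send_udp_flows(target_ip, 9000, 30)` would push the DUT past the 23-flow threshold described above.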

ajamshed commented 7 years ago

I understand your setup a bit better now. I am not sure what exactly the problem is, but can you please verify the following:

1- Make sure that the IP addresses of dpdk0 and dpdk1 are unassigned (i.e. `sudo ifconfig dpdk0 0.0.0.0 up` & `sudo ifconfig dpdk1 0.0.0.0 up`).

2- Try to run synthetic flows by running a client (e.g. epwget) and a server (e.g. epserver) across the mOS middlebox. For this setup, you may need 3 machines (1 for the client, 1 for the middlebox, and 1 for the server). It will be easier to debug the problem if this setup is tested first.

eratormortimer commented 7 years ago

Hey, sorry again for the slow reply.

  1. I made sure all interfaces had unassigned IP addresses; it did not change anything.

  2. Sadly, I can't really test with the sample applications you mentioned: I do not have a third server, and I couldn't get epwget and epserver running on the same machine.