mTCP is a highly scalable user-level TCP stack for multicore systems. The mTCP source code is distributed under the Modified BSD License; for details, please refer to the LICENSE file. The license terms of the io_engine driver and the ported applications may differ from mTCP's.
We require the following libraries to run mTCP:
- libdpdk (Intel's DPDK package) or libps (PacketShader I/O engine library) or the netmap driver
- libnuma
- libpthread
- librt
- libgmp (for the DPDK/ONVM drivers)

Compiling the PSIO/DPDK/NETMAP/ONVM drivers requires kernel headers:
apt-get install linux-headers-$(uname -r)
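On Debian-like systems, the remaining library prerequisites can usually be installed with apt as well (a sketch; the package names below are the Debian/Ubuntu ones, and libpthread/librt already ship with glibc):
sudo apt-get install build-essential libnuma-dev libgmp-dev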
We have modified the dpdk package to export net_device stat data (for Intel-based Ethernet adapters only) to the OS. To achieve this, we have created a new LKM named dpdk-iface-kmod. We have also modified the mk/rte.app.mk file to ease the compilation process of mTCP applications. We recommend using our package for DPDK installation.
You can optionally use CCP's congestion control implementation rather than mTCP's; CCP offers a wider selection of congestion control algorithms. (Currently this feature is experimental and under revision.) Using CCP for congestion control (disabled by default) requires the CCP library. If you would like to enable CCP, simply run the configure script with the --enable-ccp option.
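For example, to build the DPDK version with CCP enabled (an illustrative combination of the flags shown elsewhere in this README):
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --enable-ccp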
Install Rust. Any installation method should be fine. We recommend using rustup:
curl https://sh.rustup.rs -sSf | sh -s -- -y -v --default-toolchain nightly
Install the CCP command line utility:
cargo install portus --bin ccp
Build the library (comes with Reno and Cubic by default; use ccp get to add others):
ccp makelib
You will also need to link your application against -lccp and -lstartccp, as demonstrated in apps/example/Makefile.in.
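A sketch of what the extra link flags look like (illustrative only; $MTCP_ROOT and the flag order are assumptions, see apps/example/Makefile.in for the real layout):
# hypothetical link line; consult apps/example/Makefile.in for the real one
gcc -o my_app my_app.o -L$MTCP_ROOT/mtcp/lib -lmtcp -lccp -lstartccp -lpthread -lnuma -lrt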
mtcp: mtcp source code directory
io_engine: event-driven packet I/O engine (io_engine)
dpdk: Intel's Data Plane Development Kit
apps: mTCP applications
util: useful source code for applications
config: sample mTCP configuration files (may not be necessary)
mTCP can be prepared in four ways: DPDK, PSIO, ONVM, or netmap.
Download the DPDK submodule:
git submodule init
git submodule update
Set up DPDK:
./setup_mtcp_dpdk_env.sh [<path to $RTE_SDK>]
Press [15] to compile x86_64-native-linuxapp-gcc version
Press [18] to install igb_uio driver for Intel NICs
Press [22] to setup 2048 2MB hugepages
Press [24] to register the Ethernet ports
Press [35] to quit the tool
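After the script finishes, you can sanity-check the setup (a quick check, assuming an Intel NIC with the igb_uio driver):
lsmod | grep igb_uio               # the DPDK kernel driver should be loaded
grep -i hugepages /proc/meminfo    # HugePages_Total should reflect the pages you reserved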
Only the devices listed on the following page will work with the DPDK drivers: http://dpdk.org/doc/nics. Please make sure that your NIC is compatible before moving on to the next step.
We use the dpdk/ submodule as our DPDK driver. You can pass a different DPDK source directory as a command-line argument.
Bring the DPDK-compatible interfaces up, and then set the RTE_SDK and RTE_TARGET environment variables. If you are using Intel NICs, the interfaces will have the dpdk prefix.
sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
export RTE_SDK=`echo $PWD`/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
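To verify which ports are bound to DPDK, you can use DPDK's binding script (a sketch; the script's path and name vary across DPDK versions, e.g. tools/dpdk_nic_bind.py in older releases, usertools/dpdk-devbind.py in newer ones):
$RTE_SDK/usertools/dpdk-devbind.py --status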
Set up the mtcp library:
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET
make
By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on a 32-core system, by using the following command:
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET CFLAGS="-DMAX_CPUS=32"
Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).
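For example, to size MAX_CPUS to the machine automatically (a sketch; this assumes your NIC exposes at least that many RSS queues):
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET CFLAGS="-DMAX_CPUS=$(nproc)"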
If the ./configure script prints an error, run the following command and then re-do step 4 (configure again):
autoreconf -ivf
Checksum offloading in the NIC is now ENABLED by default. Use
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --disable-hwcsum
to disable checksum offloading.
Check libmtcp.a in mtcp/lib.
Check the header files in mtcp/include.
Check the example binary files in apps/example.
Check the configurations in apps/example:
- epserver.conf for server-side configuration
- epwget.conf for client-side configuration
Run the applications!
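For instance, a sketch of running the server/client pair (the flags, destination, and request count here are assumptions; check each app's usage output and the sources in apps/example for the exact options):
cd apps/example
sudo ./epserver -f epserver.conf                 # server side (flags illustrative)
sudo ./epwget 10.0.0.2/ 10000 -f epwget.conf     # client side (destination and count illustrative)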
You can revert all your changes by running the following script:
./setup_linux_env.sh [<path to $RTE_SDK>]
Run make in io_engine/driver:
make
Install the driver:
./install.py <# cores> <# cores>
Set up the mtcp library:
./configure --with-psio-lib=<$path_to_ioengine>
# e.g. ./configure --with-psio-lib=`echo $PWD`/io_engine
make
By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on an 8-core system, by using the following command:
./configure --with-psio-lib=`echo $PWD`/io_engine CFLAGS="-DMAX_CPUS=8"
Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).
If the ./configure script prints an error, run the following command and then re-do step 3 (configure again):
autoreconf -ivf
Check libmtcp.a in mtcp/lib.
Check the header files in mtcp/include.
Check the example binary files in apps/example.
Check the configurations in apps/example:
- epserver.conf for server-side configuration
- epwget.conf for client-side configuration
Run the applications!
NEW: You can now run mTCP applications (server + client) locally. A local setup is useful when only one machine is available for the experiment.
ONVM configurations are placed as .conf files in the apps/example directory.
ONVM basics are explained at https://github.com/sdnfv/openNetVM.
Before running the applications, make sure that onvm_mgr is running. Also, no core overlap between applications and onvm_mgr is allowed.
Set up the dpdk interfaces:
./setup_mtcp_onvm_env.sh
Next, bring the dpdk-registered interfaces up. This can be done using:
sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
Set up the mtcp library:
./configure --with-dpdk-lib=$<path_to_dpdk> --with-onvm-lib=$<path_to_onvm_lib>
# e.g. ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=`echo $ONVM_HOME`/onvm
make
By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on a 32-core system, by using the following command:
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=$<path_to_onvm_lib> CFLAGS="-DMAX_CPUS=32"
Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).
If the ./configure script prints an error, run the following command and then re-do step 4 (configure again):
autoreconf -ivf
Checksum offloading in the NIC is now ENABLED by default (this only works for DPDK at the moment). Use
./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=$<path_to_onvm_lib> --disable-hwcsum
to disable checksum offloading.
Check libmtcp.a in mtcp/lib.
Check the header files in mtcp/include.
Check the example binary files in apps/example.
Check the configurations in apps/example:
- epserver.conf for server-side configuration
- epwget.conf for client-side configuration
Run the applications!
You can revert all your changes by running the following script:
./setup_linux_env.sh
Notes
Once you have started onvm_mgr, an mTCP application may sometimes fail to launch with an error resembling the one below:
EAL: FATAL: Cannot init memory
Cannot mmap memory for rte_config at [0x7ffff7fb6000], got [0x7ffff7e74000] - please use '--base-virtaddr' option
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:06:00.0/resource3 to address: 0x7ffff7ff1000
To prevent this, pass the base virtual address parameter when running the ONVM manager (the core-list argument 0xf8 isn't actually used by mTCP NFs but is required), e.g.:
cd openNetVM/onvm
./go.sh 1,2,3 1 0xf8 -s stdout -a 0x7f000000000
See README.netmap for details.
mTCP runs on Linux-based operating systems (2.6.x for PSIO) with generic x86_64 CPUs. To help evaluation, we list our tested environments below.
Intel Xeon E5-2690 octa-core CPU @ 2.90 GHz
32 GB of RAM (4 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Debian 6.0.7 (Linux 2.6.32-5-amd64)
Intel Core i7-3770 quad-core CPU @ 3.40 GHz
16 GB of RAM (2 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Ubuntu 10.04 (Linux 2.6.32-47)
Event-driven PacketShader I/O engine (extended io_engine-0.2)
We tested the DPDK version (polling driver) with Linux-3.13.0 kernel.
mTCP currently runs with fixed memory pools. That means the sizes of the TCP receive and send buffers are fixed at startup and do not grow dynamically. This can limit performance for large, long-lived connections. Be sure to configure the buffer sizes appropriately for your workload.
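For example, the per-connection buffer sizes are set in the application's .conf file (key names as in the sample configs under apps/example; the values here are illustrative):
# in epserver.conf / epwget.conf (values illustrative)
rcvbuf = 8192
sndbuf = 8192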
The client side of mTCP supports mtcp_init_rss(), which creates an address pool that can be used to fetch an available address in O(1). To easily congest the server side, this function should be called at application startup.
The supported socket options are currently limited. Please refer to mtcp/src/api.c for more detail.
The counterpart of an mTCP connection must have TCP timestamps enabled.
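On a Linux counterpart, TCP timestamps are controlled by the net.ipv4.tcp_timestamps sysctl (enabled by default on most systems):
sudo sysctl -w net.ipv4.tcp_timestamps=1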
mTCP has been tested with the Ethernet adapters listed in the tested environments above (e.g., Intel 82599-chipset NICs).
How can I quit the application?
My application doesn't use the address specified via ifconfig.
For some Linux distros (e.g., Ubuntu), NetworkManager may re-assign a different IP address or delete the assigned IP address. Disable NetworkManager temporarily if that's the case; it will be re-enabled upon reboot.
sudo service network-manager stop
Can I statically set the routing or arp table? Yes; see the sample configuration files in the config directory.
Do not remove the I/O driver (ps_ixgbe/igb_uio) while mTCP applications are running; the application will panic!
Use the ps_ixgbe/dpdk driver contained in this package, not one from elsewhere (e.g., from the io_engine GitHub repository).
The GitHub issue board is the preferred way to report bugs and ask questions about mTCP.
CONTACTS FOR THE AUTHORS
User mailing list <mtcp-user at list.ndsl.kaist.edu>
EunYoung Jeong <notav at ndsl.kaist.edu>
M. Asim Jamshed <ajamshed at ndsl.kaist.edu>