mtcp-stack / mtcp

mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

about mtcp performance #208

Closed: wtao0221 closed this issue 6 years ago

wtao0221 commented 6 years ago

Hi, mTCP team,

Could you please give some hints on how to reproduce Figure 8 (throughput of accepting connections) from the mTCP paper?

Can I just modify ProcessTCPPacket() in tcp_in.c to send an RST packet once the sequence number has been validated?

ajamshed commented 6 years ago

Hi @wtao0221,

You don't have to make invasive changes to the mTCP core stack to re-do this test. Please see if you can use mtcp_abort(), which is available in api.c. You may need to export the function prototype in a header file.
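
For reference, a minimal sketch of that approach, assuming the prototype from api.c is int mtcp_abort(mctx_t mctx, int sockid) and has been exported as suggested (reset_connection is an illustrative name):

#include <mtcp_api.h>
#include <stdio.h>

/* Assumed to be exported from api.c into a header:
 *     int mtcp_abort(mctx_t mctx, int sockid);
 * mtcp_abort() tears the connection down with an RST instead of
 * the FIN handshake that mtcp_close() performs. */
static void
reset_connection(mctx_t mctx, int sockid)
{
        if (mtcp_abort(mctx, sockid) < 0)
                fprintf(stderr, "mtcp_abort failed on sock %d\n", sockid);
}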

wtao0221 commented 6 years ago

Hi, @ajamshed

Also: (1) does mTCP support the SO_REUSEPORT option? (2) Can there be two or more listening sockets within a single mTCP context?

ajamshed commented 6 years ago

@wtao0221,

1) mTCP supports SO_REUSEPORT behavior implicitly, by default: multiple mTCP contexts (running on different CPU cores) can bind to the same port number. You do not need to set a socket option to activate this feature.

2) Yes. You can listen on multiple ports within the same mTCP context. You would need to add all the listening sockets to the epoll set (see the sketch below).
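
For illustration, a minimal sketch of two listeners in one context, assuming mctx and ep were already created with mtcp_create_context() and mtcp_epoll_create() (port numbers and backlog are illustrative; error handling omitted):

#include <mtcp_api.h>
#include <mtcp_epoll.h>
#include <arpa/inet.h>
#include <string.h>

static void
setup_listeners(mctx_t mctx, int ep)
{
        int ports[2] = { 80, 81 };

        for (int i = 0; i < 2; i++) {
                int s = mtcp_socket(mctx, AF_INET, SOCK_STREAM, 0);
                mtcp_setsock_nonblock(mctx, s);

                struct sockaddr_in saddr;
                memset(&saddr, 0, sizeof(saddr));
                saddr.sin_family = AF_INET;
                saddr.sin_addr.s_addr = INADDR_ANY;
                saddr.sin_port = htons(ports[i]);
                mtcp_bind(mctx, s, (struct sockaddr *)&saddr, sizeof(saddr));
                mtcp_listen(mctx, s, 4096);

                /* register each listener; readiness on either port is
                   reported through the same mtcp_epoll_wait() loop */
                struct mtcp_epoll_event ev;
                ev.events = MTCP_EPOLLIN;
                ev.data.sockid = s;
                mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, s, &ev);
        }
}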

wtao0221 commented 6 years ago

Hi, @ajamshed

When I listen on multiple ports in mTCP and try to send data, I get a segmentation fault.

ajamshed commented 6 years ago

You will have to give more details about the problem that you are facing.

wtao0221 commented 6 years ago

Hi, @ajamshed

I made some minor modifications to apps/example/epserver.c so that the server listens on two different ports (i.e., 80 and 81). I use only one core, and the process starts without errors.

However, when I start the client, the server crashes.

Please see gdb output below.

Thread 5 "service" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff2ac07700 (LWP 19604)]
0x0000000000458931 in Handle_TCP_ST_SYN_RCVD (ack_seq=1078779256, tcph=<optimized out>, cur_stream=0x7fffe800b400, cur_ts=2814296331, mtcp=0x7fff24000970) at tcp_in.c:812
812                     ret = StreamEnqueue(listener->acceptq, cur_stream);
(gdb) bt
#0  0x0000000000458931 in Handle_TCP_ST_SYN_RCVD (ack_seq=1078779256, tcph=<optimized out>, cur_stream=0x7fffe800b400, cur_ts=2814296331, mtcp=0x7fff24000970) at tcp_in.c:812
#1  ProcessTCPPacket (mtcp=0x7fff24000970, cur_ts=2814296331, cur_ts@entry=3140804920, ifidx=ifidx@entry=0, iph=0x7ffff3e72e0e, ip_len=<optimized out>) at tcp_in.c:1248
#2  0x0000000000457881 in ProcessIPv4Packet (mtcp=<optimized out>, cur_ts=3140804920, cur_ts@entry=2814296331, ifidx=ifidx@entry=0, pkt_data=<optimized out>, len=<optimized out>) at ip_in.c:54
#3  0x00000000004577cd in ProcessPacket (mtcp=mtcp@entry=0x7fff24000970, ifidx=ifidx@entry=0, cur_ts=cur_ts@entry=2814296331, pkt_data=<optimized out>, len=<optimized out>) at eth_in.c:37
#4  0x000000000044f562 in RunMainLoop (ctx=0x7fff240008c0) at core.c:783
#5  MTCPRunThread (arg=<optimized out>) at core.c:1153
#6  0x00007ffff79b96ba in start_thread (arg=0x7fff2ac07700) at pthread_create.c:333
#7  0x00007ffff6e4641d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
ajamshed commented 6 years ago

@wtao0221:

It will be easier if you can share the code. Do you have it somewhere in a public repository?

wtao0221 commented 6 years ago

Hi, @ajamshed

Sorry, it turned out to be my own coding error.

BTW, could you please give some hints on reproducing Figure 7(c) (i.e., mTCP's throughput vs. message size)?

Also, does "one message per connection" mean that once one message's worth of data has been sent, we close that connection? If so, what is the concurrency level here?

ajamshed commented 6 years ago

@wtao0221:

We used a simple ping-pong experiment: a client connects to the server and sends an n-byte message (both applications are mTCP-based). The server sends back a message of the same size, and the client then closes the connection.

If I remember correctly, we were using 8K concurrent connections. Please also note that we were using the PSIO driver for our experiments at that time.
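
For what it's worth, a simplified blocking-style sketch of that per-connection exchange from the client side (ping_pong_once and the buffer size are illustrative; the real clients multiplexed thousands of connections through mtcp_epoll rather than blocking):

#include <mtcp_api.h>
#include <netinet/in.h>
#include <string.h>

/* Illustrative per-connection ping-pong: connect, send an n-byte
   message, read the n-byte echo, then close ("one message per
   connection"). */
static void
ping_pong_once(mctx_t mctx, const struct sockaddr_in *daddr, int n)
{
        char buf[8192];                     /* assumes n <= 8192 */
        int sockid = mtcp_socket(mctx, AF_INET, SOCK_STREAM, 0);

        mtcp_connect(mctx, sockid,
                     (const struct sockaddr *)daddr, sizeof(*daddr));

        memset(buf, 'x', n);
        mtcp_write(mctx, sockid, buf, n);   /* one n-byte message */

        int rcvd = 0;                       /* wait for the full echo */
        while (rcvd < n) {
                int r = mtcp_read(mctx, sockid, buf, n - rcvd);
                if (r <= 0)
                        break;
                rcvd += r;
        }

        mtcp_close(mctx, sockid);           /* close after one exchange */
}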

wtao0221 commented 6 years ago

@ajamshed

Thanks for your patient explanation.