mtcp-stack / mtcp

mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

Incorporating mtcp with an existing DPDK application #265

Open azamikram opened 4 years ago

azamikram commented 4 years ago

Hello, my current setup is that I have two machines running DPDK applications. One server sends a request; the other receives it, passes it to some other threads for processing, and finally another thread sends the response back. You can think of these applications as the l2fwd example from DPDK with some extra threads sitting on top of DPDK's threads.

Now, I want to incorporate mTCP in these applications. I built mTCP and ran the example applications, which worked just fine. As far as I understand, I don't need to deal with DPDK directly; I can just use the dpdk0 interface (with the IP assigned to the interface) to send and receive packets on both machines. On top of the mTCP threads I can have my own threads to process the packets and send the reply back. Is my understanding correct, or do I need to make any other changes to my applications?

ajamshed commented 4 years ago

@azamikram ,

This is a very open-ended question. Maybe I need more details before I can comment on your design. I suggest that you handle all your processing work in the same thread that receives messages from its peer (via mtcp_read()). Spawning more threads to handle requests/responses would incur additional overhead such as thread switching, message passing from one thread to another, etc. This will lead to unnecessary CPU cache misses.
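For reference, a run-to-completion receive loop along those lines might look roughly like the sketch below. This is only a sketch: it assumes mTCP's epoll-style API (`mtcp_epoll_create()`, `mtcp_epoll_wait()`, `mtcp_read()`, `mtcp_write()` from `mtcp_api.h`/`mtcp_epoll.h`), and `process_request()` is a placeholder for the application logic, not part of mTCP:

```c
#include <mtcp_api.h>
#include <mtcp_epoll.h>

#define MAX_EVENTS 128

/* Run-to-completion loop for one core: accept connections, and for each
 * readable socket do ALL the processing inline, then write the reply
 * back. No hand-off to worker threads, so no cross-core message passing. */
static void run_loop(mctx_t mctx, int listen_sock)
{
	struct mtcp_epoll_event ev, events[MAX_EVENTS];
	int ep = mtcp_epoll_create(mctx, MAX_EVENTS);
	char buf[8192];

	ev.events = MTCP_EPOLLIN;
	ev.data.sockid = listen_sock;
	mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, listen_sock, &ev);

	for (;;) {
		int n = mtcp_epoll_wait(mctx, ep, events, MAX_EVENTS, -1);
		for (int i = 0; i < n; i++) {
			int sock = events[i].data.sockid;
			if (sock == listen_sock) {
				int c = mtcp_accept(mctx, listen_sock, NULL, NULL);
				if (c >= 0) {
					ev.events = MTCP_EPOLLIN;
					ev.data.sockid = c;
					mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, c, &ev);
				}
			} else if (events[i].events & MTCP_EPOLLIN) {
				int r = mtcp_read(mctx, sock, buf, sizeof(buf));
				if (r <= 0) { mtcp_close(mctx, sock); continue; }
				/* process_request(buf, r);  -- your app logic, inline */
				mtcp_write(mctx, sock, buf, r);  /* send reply back */
			}
		}
	}
}
```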

azamikram commented 4 years ago

Thanks @ajamshed for the quick response.

In my applications, every thread runs on a separate core, so there is no context switching, and the threads communicate through ring buffers, so there are no locks in between.

The reason for creating multiple threads was that some threads in the pipeline can take a bit more time to process a packet while more packets keep arriving. Therefore, it made sense to have one thread that continuously polls packets from the NIC and enqueues them in different ring buffers so the others can process them, resulting in minimal packet drops.

Now, if my understanding is correct, I can replace the thread that polls packets from the NIC with mTCP thread(s), and from there on my pipeline remains the same. Am I correct?

ajamshed commented 4 years ago

@azamikram ,

Apologies for the delayed response. Please see my answers below:

> Thanks @ajamshed for the quick response.

> In my applications, every thread runs on a separate core, so there is no context switching, and the threads communicate through ring buffers, so there are no locks in between.

There is a thread switching overhead that may be costly if your link is 10+ Gbps (depending on the workload type). Ring buffers can be lock-less, but I am assuming that the implementation will most likely be using memory barriers to ensure serialization for ring buffer access(es).
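To illustrate the point about barriers: even a lock-less single-producer/single-consumer ring has to pair an acquire load with a release store on every enqueue/dequeue, which serializes the cores at the cache-coherence level. A minimal C11 sketch of such a ring (illustrative only; this is not mTCP or DPDK code, though DPDK's `rte_ring` is built on the same principle):

```c
#include <stdatomic.h>
#include <stddef.h>

#define RING_SIZE 1024  /* must be a power of two */

typedef struct {
	_Atomic size_t head;      /* consumer position */
	_Atomic size_t tail;      /* producer position */
	void *slots[RING_SIZE];
} spsc_ring;

/* Producer side: returns 0 on success, -1 if the ring is full. */
static int ring_enqueue(spsc_ring *r, void *item)
{
	size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
	if (tail - head == RING_SIZE)
		return -1;                        /* full */
	r->slots[tail & (RING_SIZE - 1)] = item;
	/* release: publish the slot contents before advancing tail */
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return 0;
}

/* Consumer side: returns 0 on success, -1 if the ring is empty. */
static int ring_dequeue(spsc_ring *r, void **item)
{
	size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
	size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
	if (head == tail)
		return -1;                        /* empty */
	*item = r->slots[head & (RING_SIZE - 1)];
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return 0;
}
```

The acquire/release pair is exactly the "memory barrier" cost mentioned above: it forces cache-line transfers of `head`, `tail`, and the slot between the producer's and consumer's cores on every hand-off.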

> The reason for creating multiple threads was that some threads in the pipeline can take a bit more time to process a packet while more packets keep arriving. Therefore, it made sense to have one thread that continuously polls packets from the NIC and enqueues them in different ring buffers so the others can process them, resulting in minimal packet drops.

Based on my previous comments, I suggest that you stick with the run-to-completion model (N threads, each thread doing all operations on the same core) by creating N NIC queues, with each core affinitized to one queue. The pipeline model will be more difficult to integrate with the mTCP context.
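A rough sketch of that setup, assuming one mTCP context per core. `mtcp_init()`, `mtcp_core_affinitize()`, `mtcp_create_context()`, and their teardown counterparts are the relevant mTCP calls; `NUM_CORES` and the elided thread body are placeholders:

```c
#include <mtcp_api.h>
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define NUM_CORES 8  /* should match num_cores in mtcp.conf */

/* Each thread pins itself to one core, creates its own mTCP context,
 * and then runs the whole receive -> process -> send cycle locally. */
static void *per_core_thread(void *arg)
{
	int core = (int)(intptr_t)arg;

	mtcp_core_affinitize(core);              /* pin thread to its core */
	mctx_t mctx = mtcp_create_context(core); /* per-core mTCP context  */
	if (!mctx)
		return NULL;

	/* ... open a listening socket, then run the event loop here ... */

	mtcp_destroy_context(mctx);
	return NULL;
}

int main(void)
{
	pthread_t threads[NUM_CORES];

	if (mtcp_init("mtcp.conf") < 0)          /* loads num_cores, etc. */
		exit(EXIT_FAILURE);

	for (int i = 0; i < NUM_CORES; i++)
		pthread_create(&threads[i], NULL, per_core_thread,
		               (void *)(intptr_t)i);
	for (int i = 0; i < NUM_CORES; i++)
		pthread_join(threads[i], NULL);

	mtcp_destroy();
	return 0;
}
```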

If you are still insistent on using the pipeline model, please see my comment below:

> Now, if my understanding is correct, I can replace the thread that polls packets from the NIC with mTCP thread(s), and from there on my pipeline remains the same. Am I correct?

You need to configure mtcp.conf so that it uses only 1 CPU. As a result, only 1 NIC queue will be created, and the mTCP stack will run in the single-threaded model that you wish to work with.
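For reference, the relevant part of mtcp.conf would look something like this (the field names follow the sample configs shipped with mTCP; the buffer and concurrency values are illustrative):

```
# mtcp.conf: single-core setup so mTCP creates exactly one
# stack thread and one NIC queue.
io = dpdk
num_cores = 1          # one mTCP thread -> one RSS queue
port = dpdk0           # interface to attach to
max_concurrency = 10000
max_num_buffers = 10000
rcvbuf = 8192
sndbuf = 8192
tcp_timeout = 30
tcp_timewait = 0
```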

I will be happy to answer any follow-up questions (hopefully my answers will be more prompt in the future :)).