High performance container overlay networks on Linux. Enabling RDMA (on both InfiniBand and RoCE) and accelerating TCP to bare metal performance. Freeflow requires zero modification on application code/binary.
MIT License
597 stars · 88 forks
CPU overhead increases significantly when running an rsocket-based app #9
In your paper "FreeFlow: Software-based Virtual RDMA Networking for Containerized Clouds", you compared native TCP (Weave) with FreeFlow + rsocket and verified that FreeFlow always outperforms Weave in both throughput and latency. In our tests we obtained similar results that support yours, but the CPU overhead was higher than we expected: CPU utilization only drops by 20% to 30% relative to Weave. We initially assumed that using rsocket would bring some extra CPU overhead,
and indeed CPU usage increases by about 50% compared with ib_send_bw. So we would like to know whether you have seen similar behavior, or whether our test results are wrong.
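For reference, one simple way to quantify the CPU overhead mentioned above is to sample aggregate CPU busy time from `/proc/stat` while the benchmark (e.g. `ib_send_bw` or the rsocket-based app) is running. This is only a sketch of one possible measurement method, not the setup used in the paper; it assumes a Linux host with the standard `/proc/stat` layout.

```python
import time

def cpu_busy_fraction(interval=1.0):
    """Sample the aggregate CPU busy fraction over `interval` seconds.

    Reads the first line of /proc/stat twice and computes the share of
    jiffies that were NOT spent in idle or iowait. This is a measurement
    sketch, not the instrumentation used in the FreeFlow paper.
    """
    def snapshot():
        with open("/proc/stat") as f:
            # First line is the aggregate "cpu" row: user nice system idle iowait ...
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait jiffies
        return idle, sum(fields)

    idle0, total0 = snapshot()
    time.sleep(interval)
    idle1, total1 = snapshot()
    dt = total1 - total0
    return 1.0 - (idle1 - idle0) / dt if dt else 0.0

if __name__ == "__main__":
    # Run this alongside the benchmark and compare the readings
    # for the Weave, FreeFlow + rsocket, and ib_send_bw cases.
    print(f"CPU busy: {cpu_busy_fraction():.1%}")
```

Comparing the busy fraction captured during each run (Weave vs. FreeFlow + rsocket vs. bare `ib_send_bw`) gives a like-for-like basis for the 20%-50% overhead figures discussed in this issue.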