NDani23 opened this issue 1 year ago
Well, the NIC is virtualized in the kernel, so roughly equal throughput is expected. Since your client's `BUF_SIZE` is 16384, you could try changing `pkt_tx_delay=0` to `pkt_tx_delay=1`. And since you are testing with a single connection, you should change `net.inet.tcp.delayed_ack=1` to `net.inet.tcp.delayed_ack=0`, then retry the test. `pkt_tx_delay` and `net.inet.tcp.delayed_ack` need to be set to different values in different test scenarios to achieve the best performance. You can also try setting `tso=0` in `config.ini` and testing again.
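For reference, the suggested settings might look roughly like this in `config.ini`. This is only a sketch: the section names `[dpdk]` and `[freebsd.sysctl]` are assumed to match the default F-Stack configuration file, so check your own copy before editing.

```ini
[dpdk]
# small TX flush delay instead of 0, as suggested above
pkt_tx_delay=1
# disable TCP segmentation offload for this test
tso=0

[freebsd.sysctl]
# disable delayed ACKs; can help single-connection throughput tests
net.inet.tcp.delayed_ack=0
```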
Hi F-stack team!
I've been experimenting with F-Stack for a few weeks now. I wrote a simple program that uses F-Stack to generate as much traffic as possible. I wanted to compare F-Stack's performance with the regular POSIX socket API, so I wrote a nearly identical server-client program that uses regular POSIX sockets (with epoll).
When running both client and server with F-Stack, `ff_traffic` on the server side shows that I receive data at 350-400 Mb/s. However, with the regular POSIX API, I measured around 450-500 Mb/s (I used bmon for the measurement).
Could you please help me find out what I am doing wrong? I'm a beginner in this field, so I would be grateful for any advice.
I used two Oracle Ubuntu VMs for testing, with an Intel PRO/1000 MT Desktop (82540EM) adapter. Because of this, I could run both the F-Stack client and server with only one core each.
Client side code: https://github.com/NDani23/Tgen/blob/main/client.c Server side code: https://github.com/NDani23/Tgen/blob/main/main.c
I thought maybe something with the HugePage allocation could have gone wrong.
Sorry for the long post.
Keep up the good work!