M0dEx / quincy

QUIC-based VPN

Performance #10

Open M0dEx opened 1 year ago

M0dEx commented 1 year ago

The performance as of 0.1.6 is worse than expected.

Between two virtual machines on the same virtualized network (capable of about 30 Gbps of throughput), Quincy only manages a small fraction of that with an MTU of 1400 bytes.

Server -> Client

Initial profiling did not yield anything suspicious, other than the fact that QuincyTunnel::process_inbound_traffic takes more time (had more samples) than QuincyTunnel::process_outbound_traffic during the Server -> Client data transfer, which is odd, as most of the data transferred should be going through QuincyTunnel::process_outbound_traffic.
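
For context, here is a minimal sketch of the two directions as a mental model (this is not Quincy's actual code: the TUN handle is modelled as a tokio AsyncRead/AsyncWrite pair and the QUIC side as a plain channel):

```rust
use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};
use tokio::sync::mpsc;

// "Outbound": read IP packets from the local TUN interface and hand them to
// the QUIC connection (modelled here as an mpsc sender).
async fn process_outbound_traffic<R>(
    mut tun_read: R,
    quic_tx: mpsc::Sender<Vec<u8>>,
    mtu: usize,
) -> std::io::Result<()>
where
    R: AsyncRead + Unpin,
{
    let mut buf = vec![0u8; mtu];
    loop {
        let n = tun_read.read(&mut buf).await?;
        if n == 0 || quic_tx.send(buf[..n].to_vec()).await.is_err() {
            return Ok(()); // TUN closed or connection gone
        }
    }
}

// "Inbound": take datagrams received from the QUIC connection and write each
// one to the local TUN interface; every packet currently goes through its own
// poll_write on the TUN handle.
async fn process_inbound_traffic<W>(
    mut tun_write: W,
    mut quic_rx: mpsc::Receiver<Vec<u8>>,
) -> std::io::Result<()>
where
    W: AsyncWrite + Unpin,
{
    while let Some(packet) = quic_rx.recv().await {
        tun_write.write_all(&packet).await?;
    }
    Ok(())
}
```

Under a model like this, the side sending the bulk of the data should spend most of its samples in its outbound loop (TUN reads plus encryption), which is why the profile above looks off.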

The CPU usage on the Server virtual machine is also only about 60 %, balanced across all cores, which could mean either too much I/O or that the Client is the bottleneck.

The CPU usage on the Client is much higher, in the 90s.

Server flamechart: s2c-server

Client flamechart: s2c-client

Client -> Server

Pretty much the same behaviour as above - QuincyClient::process_inbound_traffic takes more time than QuincyClient::process_outbound_traffic, which is, again, suspicious.

The CPU usage on the Server side is above 90 %, while on the Client side it is only ~70 %.

Server flamechart: c2s-server

Client flamechart: c2s-client

Initial conclusions

It seems that the CPU usage on the receiving side is quite high, and that the receiving side spends more time in its respective process_inbound_traffic method, which is highly suspicious (most of the data transferred should be handled by the respective process_outbound_traffic methods, at least under my initial assumption).

Further investigation is needed to determine where the Quincy client and server spend too much time.
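
One cheap way to narrow this down, independent of the flamecharts, would be to wrap the hot sections (the TUN read/write and the encrypt/decrypt calls) in manual timers. A rough sketch; the SectionTimer helper below is hypothetical and not part of Quincy:

```rust
use std::time::Duration;

/// Hypothetical helper for manual instrumentation: accumulate the time spent
/// in one code region and print the running average every `report_every`
/// samples.
struct SectionTimer {
    name: &'static str,
    total: Duration,
    samples: u64,
    report_every: u64,
}

impl SectionTimer {
    fn new(name: &'static str, report_every: u64) -> Self {
        Self {
            name,
            total: Duration::ZERO,
            samples: 0,
            report_every,
        }
    }

    fn record(&mut self, elapsed: Duration) {
        self.total += elapsed;
        self.samples += 1;
        if self.samples % self.report_every == 0 {
            println!(
                "{}: {} samples, avg {:?}",
                self.name,
                self.samples,
                self.total / self.samples as u32
            );
        }
    }
}

// Usage inside a forwarding loop, e.g. around the TUN write:
//
//     let start = std::time::Instant::now();
//     tun_write.write_all(&packet).await?;
//     tun_write_timer.record(start.elapsed());
```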

TODO

M0dEx commented 1 year ago

https://tailscale.com/blog/throughput-improvements/

and

https://tailscale.com/blog/more-throughput/

might be useful with regard to optimizing TUN performance, which seems to be the problem at the moment (a lot of time is spent in poll_write for the TUN interface).

The changes Tailscale made to wireguard-go are available here: https://github.com/WireGuard/wireguard-go/blob/master/tun/tcp_offload_linux.go
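
The bulk of the wireguard-go win comes from coalescing many packets into one large read/write on the TUN fd (GSO/GRO via virtio-net headers). A much simpler, purely application-level variant would be to drain datagrams that are already queued within one task wake-up before yielding. A rough sketch under the same channel-based model as above; write_batched and max_batch are made-up names, and each packet still costs one write syscall without kernel GSO:

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};
use tokio::sync::mpsc;

/// Write one packet, then opportunistically drain up to `max_batch - 1`
/// packets that are already queued, so the task does not pay the full
/// wake-up/poll cycle per packet. Without kernel-side GSO every packet is
/// still a separate write syscall, so this only trims scheduler overhead,
/// not syscall count. Returns false once the channel is closed.
async fn write_batched<W>(
    tun_write: &mut W,
    quic_rx: &mut mpsc::Receiver<Vec<u8>>,
    max_batch: usize,
) -> std::io::Result<bool>
where
    W: AsyncWrite + Unpin,
{
    let Some(first) = quic_rx.recv().await else {
        return Ok(false);
    };
    tun_write.write_all(&first).await?;

    for _ in 1..max_batch {
        match quic_rx.try_recv() {
            Ok(packet) => tun_write.write_all(&packet).await?,
            Err(_) => break,
        }
    }
    Ok(true)
}
```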

M0dEx commented 1 year ago

Different MTUs

With an MTU of 6000, the throughput nearly triples, to about 3 Gbps regardless of the data transfer direction.

From the flamecharts, it is clear that more CPU time is spent encrypting the packets, but most of the time is still spent in poll_write for the TUN interfaces.

The CPU usage also decreased to about 60 - 70 % on both Server and Client.
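
This is consistent with per-packet costs (TUN syscall, AEAD seal/open, QUIC datagram handling) being the limiting factor: at a roughly fixed packet rate, throughput scales with the MTU. A back-of-the-envelope check, with an assumed (not measured) packet rate:

```rust
// Back-of-the-envelope check (the packet rate is an assumed number, not a
// measurement): if per-packet costs cap us at a roughly fixed packet rate,
// throughput should scale with the MTU.
fn main() {
    let pps = 62_500.0_f64; // assumed fixed packets-per-second budget
    for mtu in [1400.0_f64, 6000.0] {
        let gbps = pps * mtu * 8.0 / 1e9;
        println!("MTU {mtu:>6} B -> {gbps:.2} Gbit/s at {pps} pps");
    }
    // Prints roughly 0.70 Gbit/s for 1400 B and 3.00 Gbit/s for 6000 B,
    // which is in the same ballpark as the measurements above.
}
```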

Server -> Client

Server flamechart: s2c-6000-server

Client flamechart: s2c-6000-client

Client -> Server

Server flamechart: c2s-6000-server

Client flamechart: c2s-6000-client

M0dEx commented 7 months ago

GSO/GRO support is work-in-progress: https://github.com/ssrlive/rust-tun/pull/45

frankozland commented 1 month ago

Related? https://users.rust-lang.org/t/zero-copy-async-io-in-rust/106996/3

M0dEx commented 1 month ago

I can try io_uring to see whether it would improve performance on Linux; it might require some nontrivial changes in the TUN library, but it could be worth a shot.
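
For reference, a minimal sketch of what a single read on a raw (e.g. TUN) file descriptor looks like through the io-uring crate; this is not integrated with Quincy or the tun crate, and a real integration would keep many operations in flight and use registered buffers:

```rust
use std::os::fd::RawFd;

use io_uring::{opcode, types, IoUring};

/// Submit a single read on a raw file descriptor through io_uring and wait
/// for its completion.
fn read_via_io_uring(fd: RawFd, buf: &mut [u8]) -> std::io::Result<usize> {
    let mut ring = IoUring::new(8)?;

    let read_e = opcode::Read::new(types::Fd(fd), buf.as_mut_ptr(), buf.len() as u32)
        .build()
        .user_data(0x42);

    // Safety: the buffer outlives the submission, since we wait for the
    // completion before returning.
    unsafe {
        ring.submission()
            .push(&read_e)
            .expect("submission queue is full");
    }

    ring.submit_and_wait(1)?;

    let cqe = ring.completion().next().expect("completion queue is empty");
    let res = cqe.result();
    if res < 0 {
        return Err(std::io::Error::from_raw_os_error(-res));
    }
    Ok(res as usize)
}
```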