katzenpost / mixnet_uprising

repository for tracking open tasks

performance load testing #81

Open david415 opened 5 years ago

david415 commented 5 years ago

Perhaps we should use a Prometheus statistics aggregator via an optional build tag. Statistics we are interested in include the rate of packet drops for each of the three AQMs in the mix server (ingress, mix, and egress), and the number of packets dwelling in each queue.
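To make the idea concrete, here is a minimal stdlib-only sketch of per-AQM drop counters rendered in the Prometheus text exposition format. The metric names are hypothetical, and a real build would presumably use github.com/prometheus/client_golang guarded by the optional build tag mentioned above; this just illustrates the shape of what we would collect.

```go
// Hypothetical per-AQM drop counters in Prometheus text exposition
// format. A real implementation would likely use client_golang behind
// an optional build tag (e.g. //go:build prometheus).
package main

import (
	"fmt"
	"sync/atomic"
)

// Counter is a minimal monotonically increasing metric.
type Counter struct {
	name string
	v    atomic.Uint64
}

// Inc increments the counter by one.
func (c *Counter) Inc() { c.v.Add(1) }

// Expose renders the counter in Prometheus text format.
func (c *Counter) Expose() string {
	return fmt.Sprintf("%s %d\n", c.name, c.v.Load())
}

var (
	ingressDrops = &Counter{name: "mix_ingress_queue_drops_total"}
	mixDrops     = &Counter{name: "mix_mix_queue_drops_total"}
	egressDrops  = &Counter{name: "mix_egress_queue_drops_total"}
)

func main() {
	// Simulate a few AQM drops.
	ingressDrops.Inc()
	ingressDrops.Inc()
	mixDrops.Inc()

	for _, c := range []*Counter{ingressDrops, mixDrops, egressDrops} {
		fmt.Print(c.Expose())
	}
}
```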

However, without any code changes we can at least send packets through a test mixnet at a very high rate and measure CPU load and memory usage. Therefore one task is to write a load-testing client that sends Sphinx packets at a tunable rate.

The goal of such tests is to surface obvious bugs and to find the performance limits. Fixing those bugs and tuning the server to withstand high throughput is exactly what is needed for real-world deployments.

david415 commented 5 years ago

version 0 of the load testing mixnet client should be designed as follows:

This client offers the following tuning parameters:

  1. tokens added per time duration (refill rate)
  2. max tokens per bucket (burst capacity)
  3. size of the buffered channel connecting the Sphinx packet creators to the sender thread

This essentially enforces a specified send rate while smoothly absorbing pauses and bursts, if tuned correctly.

Testing Environment

The mix server has a number of tuning parameters. We will want to be very careful when tuning; however, we should be able to collect specific metrics to help guide the tuning.

The Sphinx packets should be addressed to the destination Provider's "loop" service. The ingress Provider MUST have rate limiting disabled! The Sphinx packets will not include a SURB, and thus the loop service will drop each packet as soon as it is received. Each machine in the path should run Katzenpost on bare metal so the results tell us how well the software performs on real hardware.

During the run of such a load test we should record internal mix server metrics using Prometheus. In particular we are very interested in the following metrics: