katzenpost / mixnet_uprising

repository for tracking open tasks

procure servers for mixnet testnets #75

Open david415 opened 6 years ago

david415 commented 6 years ago

We need servers to test on.

mixmasala commented 6 years ago

On Greenhost I have 11 lightweight VPS nodes in AMS and HKG:

- 3 authorities: 1 core, 512MB RAM
- 6 mixes: 2 cores, 1GB RAM
- 2 providers: 2 cores, 3GB RAM

I have another, fatter node for running kimchi, but I go over my credit allotment when I start it. All instances have minimal disk allotments.

I have a few open questions that need resolving, though:

- How much disk space should we provision for providers? Consider testing with thousands of test clients.
- Will we hit CPU/memory limits before network saturation? Expectation: yes.
- How lousy are multi-tenant VPSes for doing actual measurements?
- How long will it take to develop stress/load tests? Expectation: a few weeks.
- How do we do larger-scale tests (thousands of nodes, millions of users)?

On the load-test question, a rough sketch of where I'd start is below.
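Not a design, just a minimal sketch of a load generator: N goroutines hammering a provider over plain TCP. The address and payload are placeholders, and this deliberately isn't the real katzenpost client/wire protocol, just enough to exercise connection handling under thousands of concurrent clients.

```go
// loadgen.go - minimal load-generator sketch, NOT the katzenpost client API.
// Opens N concurrent TCP connections to a placeholder provider address,
// writes dummy payloads, and counts successes and failures.
package main

import (
	"fmt"
	"net"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		addr    = "provider-1.example.net:30001" // placeholder, not a real provider
		clients = 1000                           // "thousands of test clients"
		sends   = 100                            // messages per simulated client
	)
	var ok, failed int64
	payload := make([]byte, 2048) // dummy fixed-size message

	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < clients; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < sends; j++ {
				c, err := net.DialTimeout("tcp", addr, 5*time.Second)
				if err != nil {
					atomic.AddInt64(&failed, 1)
					continue
				}
				if _, err := c.Write(payload); err != nil {
					atomic.AddInt64(&failed, 1)
				} else {
					atomic.AddInt64(&ok, 1)
				}
				c.Close()
			}
		}()
	}
	wg.Wait()
	fmt.Printf("sent=%d failed=%d in %s\n", ok, failed, time.Since(start))
}
```

Even this crude version would tell us whether the 2-core provider VPSes fall over before the link does.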

I can ask for more Greenhost credits; maybe they will give them to us, and we can spin up fatter nodes for running tests. Currently my Greenhost account has a 100-credit limit, which is supposedly equivalent to 100 EUR/month. If we are going to actually buy/lease hardware, I would suggest getting something monstrous with shitloads of cores and RAM to run larger simulations (a few racks). Preferably, though, I think we can do a lot of this type of testing with on-demand virtual infrastructure via API and not buy anything until it's clearly cost-advantageous to do so. To that end I can look into Greenhost's API, and I'm open to other providers that have some kind of API for instantiating VPS instances, or to setting up OpenStack on physical machines. Roughly the shape of thing I mean is sketched below.
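Purely to illustrate the shape of it (this is not Greenhost's actual API; the endpoint, token, and request schema below are stand-ins):

```go
// provision.go - stand-in sketch of on-demand VPS provisioning via a
// provider HTTP API. The endpoint, auth token, and request body are
// entirely hypothetical, just the shape of call we'd want scriptable.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type instanceReq struct {
	Name   string `json:"name"`
	Cores  int    `json:"cores"`
	RAMMB  int    `json:"ram_mb"`
	Region string `json:"region"`
}

func main() {
	req := instanceReq{Name: "mix-001", Cores: 2, RAMMB: 1024, Region: "AMS"}
	body, _ := json.Marshal(req)
	// Placeholder endpoint and auth; substitute the provider's real API.
	httpReq, err := http.NewRequest("POST",
		"https://api.example-provider.net/v1/instances", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	httpReq.Header.Set("Authorization", "Bearer REDACTED")
	httpReq.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("provisioning request status:", resp.Status)
}
```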

For a running demonstration network whose availability we maintain, I think something spec'd similarly to my Greenhost deployment will work fine and shouldn't cost more than about 100 EUR/month.

For running simulation networks at larger scale, I'd need to look into spot pricing to get a rough idea of how much these tests would cost per run, e.g. 1000 nodes @ $0.15/hr * 6 hours = $900.
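Making that back-of-envelope math repeatable for different run sizes (the $0.15/node-hour rate is an assumed spot price, to be replaced with real quotes):

```go
// runcost.go - back-of-envelope cost estimator for simulation runs.
// The hourly rate is a placeholder until we have actual spot quotes.
package main

import "fmt"

func main() {
	const ratePerNodeHour = 0.15 // USD, assumed spot price
	for _, run := range []struct {
		nodes int
		hours float64
	}{
		{100, 6},
		{1000, 6},
		{1000, 24},
	} {
		cost := float64(run.nodes) * ratePerNodeHour * run.hours
		fmt.Printf("%4d nodes x %4.0f h @ $%.2f/node-h = $%8.2f\n",
			run.nodes, run.hours, ratePerNodeHour, cost)
	}
}
```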

moba commented 6 years ago

I wonder how many distinct physical machines a testbed of a thousand nodes actually needs. My feeling is that for a large-scale test, one beefy machine with docker/libvirt isolation makes the most sense and is the cheapest. For other tests, where geographical diversity, real network congestion, etc. play a role, I'm afraid I would go with a cloud provider. It does not have to be AWS. Something like the sketch below for the single-machine case.
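A rough sketch of launching such a testbed by driving the docker CLI from Go. The image name (katzenpost/mix) and config layout are hypothetical; only the docker flags are standard:

```go
// testbed.go - sketch: launch N isolated mix nodes as containers on one
// beefy machine. Image name and config path are hypothetical; the docker
// CLI flags (--name, --memory, --cpus, -v) are standard.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const n = 1000 // target testbed size
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("mix-%04d", i)
		cmd := exec.Command("docker", "run", "-d",
			"--name", name,
			"--memory", "64m", // tight per-node limits so 1000 nodes fit
			"--cpus", "0.1",
			"-v", fmt.Sprintf("/srv/testbed/%s:/conf", name), // hypothetical config dir
			"katzenpost/mix:latest") // hypothetical image
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("starting %s: %v: %s", name, err, out)
		}
	}
}
```

With 64MB/0.1-CPU limits, a thousand such containers need roughly 64GB of RAM plus overhead, which is well within one beefy machine.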

There's also PlanetLab, which was built "exactly for our use case". The hurdle is that it needs to go via researchers, which, while more difficult to arrange, could actually be quite beneficial in terms of results (a nice academic paper) and a long-term research relationship. I think Ian Goldberg has good contacts there. GDanezis probably also knows some people we could ask.

david415 commented 6 years ago

@mixmasala I presume that providers using the boltdb backend will hit lock contention long before they run out of CPU or memory. The server is tuned to work at gigabit ethernet speeds. What kind of network link were you thinking of saturating? Unfortunately you did not specify, so I presume gigabit, in which case you'd be incorrect.
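To see that serialization directly, here's a quick micro-benchmark sketch against bbolt (the maintained boltdb fork); whether single-bucket 2KB puts resemble the provider spool's actual write pattern is an assumption:

```go
// boltcontention.go - micro-benchmark sketch of boltdb's single-writer
// lock: concurrent Update transactions serialize, so adding writer
// goroutines does not add write throughput.
package main

import (
	"fmt"
	"log"
	"sync"
	"time"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("contention.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	const writers, txPerWriter = 16, 200
	var wg sync.WaitGroup
	start := time.Now()
	for w := 0; w < writers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := 0; i < txPerWriter; i++ {
				// Each Update takes the db-wide write lock; these
				// goroutines queue behind each other.
				err := db.Update(func(tx *bolt.Tx) error {
					b, err := tx.CreateBucketIfNotExists([]byte("spool"))
					if err != nil {
						return err
					}
					key := []byte(fmt.Sprintf("w%02d-%06d", w, i))
					return b.Put(key, make([]byte, 2048))
				})
				if err != nil {
					log.Fatal(err)
				}
			}
		}(w)
	}
	wg.Wait()
	elapsed := time.Since(start)
	fmt.Printf("%d writers x %d tx in %s (%.0f tx/s total)\n",
		writers, txPerWriter, elapsed,
		float64(writers*txPerWriter)/elapsed.Seconds())
}
```

Scaling `writers` up barely changes total tx/s, since every Update queues on the single write lock.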

mixmasala commented 6 years ago

We shall see where the bottlenecks are when we run the tests. I did not specify, but I did assume gigabit links on all the test nodes.

mixmasala commented 6 years ago

I used to have PlanetLab access. FYI, there are some deployment restrictions: how applications have to be packaged, etc., and what kinds of data can/cannot be collected (e.g., data involving human interactions).