What happens to the time it takes to run the hashgraph algorithm as the number of nodes increases? This could be a JMH benchmark with no networking at all: just feed the hashgraph algorithm events from N "fake" nodes and measure how long it takes to reach consensus.
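A minimal sketch of that harness, with no JMH dependency and no networking. The Event record and the consensusOrder function below are hypothetical stand-ins; the real test would call the actual hashgraph implementation (ideally under JMH so warmup and JIT effects are handled properly):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the no-networking benchmark: generate synthetic events from N
// "fake" creators, feed them to a consensus function, and time it.
public class ConsensusTiming {
    // Hypothetical minimal event shape: creator id, own sequence number, and
    // the sequence number of a random earlier event standing in for gossip.
    record Event(int creator, int seq, int otherParentSeq) {}

    // Placeholder "consensus" so the sketch runs: orders events by sequence
    // number. Swap in the real hashgraph consensus call here.
    static List<Event> consensusOrder(List<Event> events) {
        List<Event> sorted = new ArrayList<>(events);
        sorted.sort((a, b) -> Integer.compare(a.seq(), b.seq()));
        return sorted;
    }

    static long timeConsensusNanos(int numNodes, int eventsPerNode) {
        Random rng = new Random(42); // fixed seed for repeatable runs
        List<Event> events = new ArrayList<>();
        int seq = 0;
        for (int round = 0; round < eventsPerNode; round++) {
            for (int node = 0; node < numNodes; node++) {
                int otherParent = seq == 0 ? -1 : rng.nextInt(seq);
                events.add(new Event(node, seq++, otherParent));
            }
        }
        long start = System.nanoTime();
        consensusOrder(events);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // Sweep N and watch how consensus time grows with node count.
        for (int n : new int[] {4, 8, 16, 32, 64, 128}) {
            System.out.println(n + " nodes: " + timeConsensusNanos(n, 100) + " ns");
        }
    }
}
```

The sweep in main is the interesting part: plotting time against N should show whether growth is linear, quadratic, or worse.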
If I have N nodes and C connections per node, what happens as C decreases? Suppose I have 26 nodes with a connection count of 25 (fully connected). Now drop to a connection count of 24, then 23, and so on down to 1. What happens to the latency?
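Before running the full test, the shape of the answer can be estimated from graph distance: the average number of gossip hops between two nodes is a rough proxy for latency. A sketch, assuming a hypothetical ring topology where each node keeps connections to its next C neighbors (the real sync graph would differ, but the trend should be similar):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Estimate gossip indirection: average shortest-path hop count over all node
// pairs, on a ring of n nodes where node i is connected to nodes i+1..i+c
// (mod n), with connections treated as two-way persistent links.
public class GossipHops {
    static double avgHops(int n, int c) {
        double total = 0;
        for (int src = 0; src < n; src++) {
            // Breadth-first search from src gives shortest hop counts.
            int[] dist = new int[n];
            Arrays.fill(dist, -1);
            dist[src] = 0;
            Queue<Integer> queue = new ArrayDeque<>();
            queue.add(src);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int j = 1; j <= c; j++) {
                    // Explore both directions, since links are bidirectional.
                    for (int v : new int[] {(u + j) % n, ((u - j) % n + n) % n}) {
                        if (dist[v] == -1) {
                            dist[v] = dist[u] + 1;
                            queue.add(v);
                        }
                    }
                }
            }
            for (int d : dist) total += d;
        }
        return total / ((double) n * (n - 1));
    }

    public static void main(String[] args) {
        int n = 26;
        // Sweep C from fully connected down to 1 and watch hops grow.
        for (int c = n - 1; c >= 1; c--) {
            System.out.printf("C=%2d  avg hops = %.2f%n", c, avgHops(n, c));
        }
    }
}
```

At C = 25 every pair is one hop apart; as C falls toward 1 the average hop count climbs roughly like log N / log C, which is the indirection penalty the real latency test should confirm.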
Do a JRS test that scales up to a fully connected network of 128 nodes.
That should give us a pretty good idea of what will happen as we increase the number of nodes. Maybe we will find that the hashgraph algorithm can only handle 40 nodes before latency goes crazy, or maybe 400, or maybe 40,000. That gives us the upper bound for the case where networking is free. The second test shows what happens to latency as indirection increases. Assuming Amazon's limit is 40 outgoing persistent connections, the only way to get past roughly 40 nodes is to stop using a fully connected network.
In short, those are the couple of data points I think we need to get.