ethresearch / sharding-p2p-poc

Proof of Concept of Ethereum Serenity Peer-to-Peer Layer on libp2p PubSub System

Testing Methodology #43

Open zscole opened 6 years ago

zscole commented 6 years ago

Overview The network is segmented into X shards. Every ~10 minutes, validators are randomly reassigned to a shard, so the key stress point is observing and testing whether validators can subscribe to new topics and send/receive messages on those topics within an adequate amount of time.
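To make the stress point concrete, here is a minimal sketch (not part of the actual protocol code) of the churn that periodic random reassignment creates. The shard count, validator count, and assignment rule are hypothetical placeholders; the point is that under uniform random assignment, most validators change shards every epoch, so each must unsubscribe from the old topic and subscribe to the new one.

```python
import random

NUM_SHARDS = 8        # hypothetical shard count ("X" in the text)
NUM_VALIDATORS = 100  # hypothetical validator-set size

def assign(validators, num_shards, rng):
    """Randomly assign each validator to a shard for one ~10-minute epoch."""
    return {v: rng.randrange(num_shards) for v in validators}

def churn(prev, curr):
    """Validators whose shard changed, i.e. who must unsubscribe/resubscribe."""
    return [v for v in curr if prev[v] != curr[v]]

rng = random.Random(42)
validators = list(range(NUM_VALIDATORS))
epoch0 = assign(validators, NUM_SHARDS, rng)
epoch1 = assign(validators, NUM_SHARDS, rng)
moved = churn(epoch0, epoch1)
# With uniform assignment, roughly (1 - 1/NUM_SHARDS) of validators move
# each epoch, so topic churn is the common case, not the exception.
```

This is why subscribe/resubscribe latency, rather than steady-state throughput, is the metric the tests below focus on.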

Test Utility We will perform tests using the Whiteblock testing platform. The following functionalities would likely be most relevant to this particular test series:

Test Scenarios

Need to Define

Other Notes

mhchia commented 6 years ago

Good work! I think it makes sense. Thanks a lot for this document.

Some questions:

Test Utility

Test Scenarios

Need to define

Configuration specifications for relevant test cases to create templates which allow for quicker test automation.

Sorry I don't get what you mean here. Can you elaborate more?

Code which should be tested.

My thought is to test with the docker image. So there is one ./sharding-p2p-poc process running inside one docker instance. We just use the CLI commands to control it.

Preliminary testing methodology should be established based on everyone's input. We can make adjustments to this methodology based on the results of each test case. It's generally best (in my experience) to create a high-level overview which provides a more granular definition of the first three test cases and then make adjustments to each subsequent test series based on the results of those three.

Agree. Currently, we use the CLI to communicate with the running nodes, and from the results we see what happens after every command. My thought is to change the output to be easier to parse, and then construct tests as shell scripts as a first step. Do you have any ideas on a testing methodology for our case?
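One way to make the CLI output "easier to parse" is newline-delimited JSON. The sketch below assumes a hypothetical output format (one JSON object per line, with `event`/`topic`/`ok` fields); the actual sharding-p2p-poc CLI output is still to be decided, so these field names are illustrative only.

```python
import json

def parse_cli_output(raw: str):
    """Parse hypothetical newline-delimited JSON emitted by the node CLI,
    e.g. {"event": "subscribe", "topic": "shard_3", "ok": true}."""
    results = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        results.append(json.loads(line))
    return results

sample = ('{"event": "subscribe", "topic": "shard_3", "ok": true}\n'
          '{"event": "send", "topic": "shard_3", "ok": true}\n')
events = parse_cli_output(sample)
```

Once the output is structured like this, shell-script tests can pipe it through a small checker instead of grepping free-form log text.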

btw. I modified your comment to fix the content of the URL of WhiteBlock.

zscole commented 6 years ago

Do we need to modify our code in order to do the emulation?

No, our platform is meant to accommodate the code. The idea is that the code shouldn't need to accommodate the platform. As long as it functions, we should be able to just throw it on the platform and get it going.

What do you mean by "Automated provisioning of nodes"?

The ability to deploy a specified number of fully configured nodes based on the Dockerfile and, if needed, apply some sort of scheduling logic to this process. Our platform functions similarly to Kubernetes and other orchestration/infrastructure-automation utilities. We developed a custom wrapper for Docker that was purpose-built for blockchain-specific containerization.
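As a rough illustration of "automated provisioning of nodes", the sketch below just generates the `docker run` invocations for N nodes from a template. The image name, container names, and port scheme are hypothetical placeholders, not the actual Whiteblock or sharding-p2p-poc interfaces.

```python
def provision_commands(num_nodes, image="sharding-p2p-poc", base_port=10000):
    """Generate hypothetical `docker run` commands for N identical nodes,
    each exposing a distinct port. Flags/names are illustrative only."""
    cmds = []
    for i in range(num_nodes):
        port = base_port + i
        cmds.append(f"docker run -d --name node{i} -p {port}:{port} {image}")
    return cmds

cmds = provision_commands(3)
```

A real provisioning layer would additionally inject per-node configuration (keys, bootnode addresses) and apply scheduling constraints, which is where the platform's orchestration logic comes in.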

We might also need to consider the network topology? Though at the beginning we can just spin up multiple bootnodes, and let the other nodes bootstrap through the bootnodes.

Can you please clarify your definition of network topology in this context? Part of the node deployment module includes the ability to assign nodes to independent VLANs and assign IP addresses. This provides the ability to configure and control links between nodes (either individual links or the network as a whole), such as by imposing varying degrees of latency, bandwidth constraints, packet loss, etc.
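On Linux, this kind of per-link impairment is typically done with `tc`/netem (a standard kernel traffic-control facility). The helper below only builds the command strings; the interface name and values are illustrative, and applying them would require root on the target container.

```python
def netem_command(dev, delay_ms=None, loss_pct=None, rate=None):
    """Build a `tc qdisc` command applying netem impairments to an interface.

    `tc qdisc add ... netem delay/loss/rate` is standard Linux traffic
    control; the device name and parameter values here are examples only.
    """
    parts = [f"tc qdisc add dev {dev} root netem"]
    if delay_ms is not None:
        parts.append(f"delay {delay_ms}ms")
    if loss_pct is not None:
        parts.append(f"loss {loss_pct}%")
    if rate is not None:
        parts.append(f"rate {rate}")
    return " ".join(parts)

cmd = netem_command("eth0", delay_ms=100, loss_pct=1)
```

Generating the commands from test-case definitions keeps the impairment profile reproducible across runs.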

Sorry I don't get what you mean here. Can you elaborate more?

We can establish an automated workflow, kind of like you would with Chef. Once we flesh out definitions for each test case within the series, we can use these definitions to create a configuration file which allows for scheduling and automation.
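For instance, a test-case definition could be a small declarative record that the automation layer validates before scheduling a run. Every field name below is an assumption for illustration, not an agreed-upon format.

```python
# Hypothetical test-case template; field names are assumptions only.
TEST_CASE = {
    "name": "resubscribe-latency-baseline",
    "nodes": 20,
    "bootnodes": 2,
    "shards": 8,
    "network": {"delay_ms": 50, "loss_pct": 0.1},
    "duration_s": 600,
    "metrics": ["subscribe_latency", "message_propagation_time"],
}

def validate(case):
    """Minimal sanity check before handing a test case to the scheduler."""
    required = {"name", "nodes", "shards", "duration_s"}
    missing = required - case.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True
```

Keeping the definitions declarative is what makes the "quicker test automation" mentioned above possible: new cases are data, not new scripts.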

Do you have any idea on any testing methodology in our case?

Let's talk about this offline so we can brainstorm!

mhchia commented 6 years ago

Can you please clarify your definition of network topology in this context?

I mean the topology of our overlay network, instead of the one in the TCP/IP layer. We might want to test
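To illustrate what an overlay-topology test might start from, here is a sketch of the bootnode setup mentioned earlier: every regular node initially connects only to the bootnodes, forming a star, and a connectivity check verifies the graph. This is a toy model; real libp2p peer discovery would evolve the graph after bootstrap.

```python
from collections import deque

def star_topology(num_bootnodes, num_nodes):
    """Overlay where every regular node connects to every bootnode (a toy
    model of the 'bootstrap through the bootnodes' setup)."""
    boot = [f"boot{i}" for i in range(num_bootnodes)]
    nodes = [f"node{i}" for i in range(num_nodes)]
    edges = {p: set() for p in boot + nodes}
    for n in nodes:
        for b in boot:
            edges[n].add(b)
            edges[b].add(n)
    return edges

def is_connected(edges):
    """BFS connectivity check over the overlay graph."""
    start = next(iter(edges))
    seen = {start}
    queue = deque([start])
    while queue:
        for nbr in edges[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(edges)

topo = star_topology(2, 10)
```

Other candidate topologies (ring, random mesh) could be generated the same way and fed to the platform's link configuration.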

We can establish an automated workflow kind of like you would using Chef. Once we flesh out definitions for each test case within the series, we can use these definitions to create a configuration file which allows for scheduling and automation.

Got it. It makes sense.

Thank you for the detailed explanation!