As introduced by Nitin in our meeting on 23 May, a new architecture (or protocol?) is proposed for performing MPC in distributed settings (e.g. Solid-like), referred to as PPC evaluation. We are running benchmarks (based on https://github.com/data61/MP-SPDZ) to obtain empirical evidence on two questions:
How well does the new architecture scale as the number of participants increases?
What is the bottleneck of the architecture?
Two main settings:
Homogeneous nodes with the same hardware and network performance -- run directly by the library, with network conditions controlled by tc.
Heterogeneous nodes with different hardware and network performance -- run in separate Docker containers, with each container's network interface controlled by tc.
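The tc-based network control in both settings can be sketched as below. The interface name (eth0), delay (20 ms) and rate (50 Mbit/s) are illustrative assumptions, not values from the meeting; by default the script only prints the commands, since applying them requires root.

```shell
# Sketch of shaping one node's network with tc/netem (assumptions:
# interface eth0, 20 ms delay, 50 Mbit/s rate -- illustrative only).
IFACE="${IFACE:-eth0}"
NETEM_ARGS="delay 20ms rate 50mbit"
RUN="${RUN:-echo}"   # default: print commands; set RUN= to execute (needs root)

$RUN tc qdisc del dev "$IFACE" root          # clear any existing root qdisc
$RUN tc qdisc add dev "$IFACE" root netem $NETEM_ARGS
$RUN tc qdisc show dev "$IFACE"              # verify the qdisc is in place
```

For the heterogeneous setting, the same commands would be run per Docker container (or per veth interface), with different NETEM_ARGS per node.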
Rui has managed to run the tutorial code on a number of encrypted data files (on a local server) and has started experimenting with set-ups that vary the number of nodes per operation for benchmarking.
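A local multi-party run of the kind described above can be sketched with MP-SPDZ's tutorial program; the party count (3) and protocol (MASCOT) are illustrative choices, not what Rui necessarily used. Commands are printed rather than executed here, as they require a built MP-SPDZ checkout.

```shell
# Sketch of a local MP-SPDZ tutorial run (assumes a built MP-SPDZ checkout;
# party count and protocol are illustrative assumptions).
PLAYERS="${PLAYERS:-3}"
RUN="${RUN:-echo}"   # default: print commands; set RUN= to execute in MP-SPDZ

$RUN ./compile.py tutorial                   # compile the tutorial .mpc program
$RUN Scripts/setup-ssl.sh "$PLAYERS"         # generate per-party TLS certificates
$RUN env PLAYERS="$PLAYERS" Scripts/mascot.sh tutorial   # run all parties locally
```

Varying PLAYERS here is one way to probe the scalability question above before moving to separate nodes.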
@@ACTION to Rui: clarify with Nitin 1) how the data files were encrypted and whether encryption should be part of the experiments, in which case it should be done on the fly; 2) whether the data files need to be stored on separate computational nodes for a truly decentralised experiment.
@@ACTION to Rui: clarify with Nitin how non-numerical operations can be supported.