encryptogroup / MOTION2NX

A framework for generic hybrid two-party computation and private inference with neural networks
MIT License

ABY2.0 examples #2

Open lu562 opened 2 years ago

lu562 commented 2 years ago

Hi,

Thanks for providing this library! I have some questions:

(1) As mentioned, this repo provides implementations of the ABY2.0 protocols. However, I'm not able to find any example code that uses them. May I know how to use or benchmark them? In particular, I would like to test the performance of ABY2.0's secure comparison protocol (bit extraction).

(2) Since ABY is supported by MOTION, I assume additive secret sharing is supported, right? And if I initialize an arithmetic sharing, it will be an additive sharing — is that the case?

(3) For arithmetic sharing, is there an API through which parties can provide their shares as input? (In ABY, such an API is called PutSharedINGate().) For example, assume additive secret sharing where the secret is 10, party 0 holds the share 3, and party 1 holds the share 7, so that 3 + 7 = 10. Is there an API such that party 0 can input 3 and party 1 can input 7 as their respective shares of that value?

Thank you! Looking forward to your reply!

lenerd commented 2 years ago

Hi,

(1) Currently, not all protocols presented in the ABY2.0 paper are implemented here; unfortunately, we do not yet have an implementation of bit extraction. We have the primitive operations for hybrid circuits (Share, Reconstruct, XOR, AND, Addition, Multiplication, Square, Bit-Integer Multiplication, and Conversions), as well as several tensor operations for neural networks. In the source code, they are still referred to as BEAVY / beavy, since that was a working title of ABY2.0 before publication.

(2) Yes, additive secret sharing (arithmetic GMW) is implemented. Which kind of sharing you get depends on whether you call the make_arithmetic_$bitlen_input_gate_{my,other} method of the GMWProvider or of the BEAVYProvider. I will try to create a brief tutorial soon-ish.

(3) There is currently no extra API for this (we should add one, though :), but in principle it would be sufficient to create the corresponding ArithmeticGMWWire&lt;T&gt; objects. You can then combine them in the usual way. Once you have the shares, store them in the wires and mark them as ready.

lu562 commented 2 years ago

Thank you for the reply!

lu562 commented 2 years ago

I see that there is some example code for neural network training. May I know whether the training is done without secure comparison protocols? And could you explain a little more about how the activation function is implemented? Thanks in advance :)

lenerd commented 2 years ago

Currently, we have only neural network inference, but no protocols for training.

For the ReLU activation, different protocols are implemented that essentially multiply/AND the value with the inverted most significant bit, which denotes the sign. All of them require that the input is already available as a Boolean sharing (which is usually the case if, for example, a MaxPool operation was computed beforehand).

Comparisons are also used in the MaxPool implementation.

The circuits are either given in circuits/int/ or implemented in the CircuitLoader class. The implementation of all tensor operations can be found in src/motioncore/protocols/<protocol>/tensor_op.{h,cpp}.

Srish-4 commented 1 year ago

Hi,

Speaking of the MaxPool operation using Boolean ABY2.0: it is not giving accurate results. It always returns the element at the 0th index of the provided vector, whereas the Yao implementation works fine. Please share your insights on why this might be the case.

Thanks in advance !