data61 / MP-SPDZ

Versatile framework for multi-party computation

Modifying SecureNN Benchmarks #23

Closed mayank0403 closed 4 years ago

mayank0403 commented 5 years ago

Is there a straightforward way to add a ReLU layer and 64-bit integer quantization to the example file MP-SPDZ/Programs/Source/benchmark_secureNN.mpc?

In this file, the 4 SecureNN neural networks are written, but I don't see a ReLU activation between different layers. Is ReLU being simulated in some way?

I observed that the file runs 8-bit quantized networks. This is set in these lines:

p1 = squant_params(sfloat(.001), sint(1), 8)
p2 = squant_params(sfloat(.002), sint(2), 8)
p3 = squant_params(sfloat(.003), sint(3), 8)

If I change the last argument to 64, does it make the quantization 64-bit?

mkskeller commented 5 years ago

Generally, note that the file is a simple benchmark rather than anything near an implementation of SecureNN.

Is there a straightforward way to add a ReLU layer and 64-bit integer quantization to the example file MP-SPDZ/Programs/Source/benchmark_secureNN.mpc?

I haven't implemented comparison for quantized numbers, so this requires accessing the internal representation, along the lines of relu = lambda x: (x.v < x.Z).if_else(x.Z, x.v), but that returns the quantized integer rather than a new squant, so more work is needed there.
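
To make this concrete, here is a rough sketch; x.v and x.Z are the internal quantized integer and zero point mentioned above, and the final repackaging of the result into a squant is deliberately left out, as that is the part that still needs work:

def relu_quant_int(x):
    # (x.v < x.Z) yields a secret bit; if_else returns its first argument
    # when the bit is 1, so values below the zero point (which encodes 0)
    # are clamped to the zero point
    return (x.v < x.Z).if_else(x.Z, x.v)

# Caveat: this returns the raw quantized integer, not a new squant with the
# same scale and zero point; that wrapping is the missing piece.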

In this file, the 4 SecureNN neural networks are written, but I don't see a ReLU activation between different layers. Is ReLU being simulated in some way?

No, the benchmark is run without ReLU.

I observed that the file runs 8-bit quantized networks. This is set in these lines:

p1 = squant_params(sfloat(.001), sint(1), 8)
p2 = squant_params(sfloat(.002), sint(2), 8)
p3 = squant_params(sfloat(.003), sint(3), 8)

If I change the last argument to 64, does it make the quantization 64-bit?

In theory this can be changed, but I haven't tested it. It would certainly require a much larger field or ring than the default.
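
As an untested illustration, the change in benchmark_secureNN.mpc would look like this:

p1 = squant_params(sfloat(.001), sint(1), 64)
p2 = squant_params(sfloat(.002), sint(2), 64)
p3 = squant_params(sfloat(.003), sint(3), 64)

and the program would then have to be compiled for a correspondingly larger ring or field, for example a 128-bit ring via compile.py's -R option; the exact size required is untested.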

mayank0403 commented 4 years ago

Thanks a lot for your prompt previous comment. One last clarification before I close the issue: you said that you haven't implemented comparison for quantized numbers yet, so how is the MaxPool in these SecureNN benchmark files working? Is the maxpool function not doing comparisons and only picking some elements to match the output dimensions of the MaxPool layer? In other words, is there any "computation" going on inside MaxPool?

mkskeller commented 4 years ago

We haven't implemented comparison for quantized numbers, but we have for integers, which comes at roughly the same cost. maxpool(l, n) computes the oblivious selection of the maximum of l integers, n times in parallel. I agree that ReLU could be simulated similarly, but we seem to have missed this.
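
For intuition, the oblivious maximum over a list of secret integers looks roughly like this (an illustrative sketch, not the benchmark's actual maxpool code):

def oblivious_max(values):
    # values is a list of sint; nothing about the inputs is revealed
    res = values[0]
    for v in values[1:]:
        # (v > res) is a secret bit; if_else obliviously selects the larger value
        res = (v > res).if_else(v, res)
    return res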

mkskeller commented 4 years ago

For clarification, I would like to add that sfix provides a special case of quantization where the scale is a power of two and the zero point is 0. sfix is much further developed (including comparisons and division), and it has been tested with various precisions. You can use sfix.set_precision(f, k), where f stands for the precision after the point (i.e. a scale of 2^-f) and k stands for the total precision, that is, using k-bit integers internally, or representing numbers up to 2^(k-f).
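
For example (the concrete precision values here are just for illustration):

sfix.set_precision(16, 31)   # scale 2^-16, 31-bit integers internally
a = sfix(3.25)
b = sfix(-1.5)
c = (a > b).if_else(a, b)    # comparison and oblivious selection work on sfix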

mkskeller commented 4 years ago

I have realized that ReLU is implicitly done as part of quantization clamping, that is, making sure that the output is within the representable range. The code is here: https://github.com/data61/MP-SPDZ/blob/5f0a7ad8e3dd7a86c8c9b9f32d65efc07691beb0/Compiler/types.py#L2654 Note that squant.clamp is true by default.
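
Conceptually, the clamping amounts to something like the following (a simplified sketch, not the linked code); when the zero point coincides with the bottom of the representable range, the lower clamp acts exactly as ReLU:

def clamp(v, min_q, max_q):
    # force the quantized integer v into [min_q, max_q]
    v = (v < min_q).if_else(min_q, v)
    v = (v > max_q).if_else(max_q, v)
    return v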

mayank0403 commented 4 years ago

I see.

Thank you for the clarification.