Closed WeiViming closed 2 years ago
I think it might be a datatype/overflow issue. Is myType set to 32 bits? If so, numbers such as 300 might be "too large"; try changing myType to uint64_t and see if the computation works.
Edit: I just realized that this is already 64 bits. What is the precision (FLOAT_PRECISION) value? There is definitely some overflow happening.
If we increase FLOAT_PRECISION from 13 to 14 or more, we get an erroneous result:
a: 4915200 4194304 4177920
b: 4915200 4194304 4177920
Quot: 70338025881600 70239236653056 70228182220800
Quotient[0]: 4.29309e+09
Quotient[1]: 4.28706e+09
Quotient[2]: 4.28639e+09
But if we decrease it to 12, we get a good result like this:
a: 1228800 1048576 1044480
b: 1228800 1048576 1044480
Quot: 4094 4088 4087
Quotient[0]: 0.999512
Quotient[1]: 0.998047
Quotient[2]: 0.997803
With FLOAT_PRECISION=13, the test above gives different results even when data_a[2]=255 and data_b[2]=255, but I only changed data_a[0] and data_b[0]. Why?

About the first comment, I think the best approach is to understand the set of operations/MPC protocols applied to the data to get an idea of how large the numbers get. Assuming 64-bit numbers with 13 bits of precision, the power function over the floating-point value 255 will result in 8 + 13 = 21 bits. This is directly related to the truncation later on, roughly 21*2 - 13 = 29 bits. Such an analysis will give you an idea of how large the numbers can get.
About the second question, I'm not sure what you're asking. Are data_a[0] and data_b[0] the same as data_a[2] and data_b[2], but the result is different?
For the second question: the first test with data_a[2]=data_b[2]=256 gives us 0.993286, but the second gives -0.997925. The only difference between those tests is that the second test changes data_a[0] and data_b[0].
It sounds like some minor mistake; make sure you're running the right numbers (your first post indicates data_a[2]=255, not 256). Try to work out a minimal error example: make data_a an array with a single element and reproduce your error.
Sorry for my poor explanation.
Actually, I mean that if the first element of data_b overflows and gets a bad result, the second element of data_b will also get a bad result even though it does not overflow. I ran some tests as follows:
```cpp
#define myType uint64_t
#define FLOAT_PRECISION 13

// test 1
vector<myType> data_a = {floatToMyType(255)},
               data_b = {floatToMyType(255)};
// returns 0.993286
// (Good)

// test 2
vector<myType> data_a = {floatToMyType(256)},
               data_b = {floatToMyType(256)};
// returns -0.998047
// (Bad; as you say, this is because 256 may be too large)

// test 3
vector<myType> data_a = {floatToMyType(255), floatToMyType(255)},
               data_b = {floatToMyType(255), floatToMyType(255)};
// returns 0.993286 0.993286
// (Both good.)

// test 4
vector<myType> data_a = {floatToMyType(256), floatToMyType(256)},
               data_b = {floatToMyType(256), floatToMyType(256)};
// returns -0.998047 -0.998047
// (Both bad, because both of data_b's elements overflow.)

// test 5
vector<myType> data_a = {floatToMyType(255), floatToMyType(255)},
               data_b = {floatToMyType(255), floatToMyType(256)};
// returns 0.993286 0.988647
// (Both good, even though the second element of data_b overflows.)

// test 6
vector<myType> data_a = {floatToMyType(255), floatToMyType(255)},
               data_b = {floatToMyType(256), floatToMyType(255)};
// returns -0.994141 -0.997925
// (Both bad.)
```
As you can see, the difference between test 5 and test 6 is whether the first element of data_b overflows or not. My questions are as follows:

1. In test 5, the second element of data_b overflows, yet it still gets a good result. Why?
2. In test 6, the first element of data_b overflows but the second does not, yet the second still gets a bad result. Why?

It seems that whether or not the first element of data_b overflows impacts the other elements: if the first element overflows, even elements that do not overflow get a bad result. This is why I am confused about this part. Hope you can understand my poor English. Thank you.
Thanks for the detailed comment and I think I know exactly what is causing this bug. It was something I was going to implement at some point but never got around to it.
In the division function, this line has to be vectorized. Originally, I think funcPow was not vectorized either, but now it seems to be, so the changes will be confined to the funcDivision protocol. In particular, this set of lines will have to be "vectorized", i.e., the precision variable from line 1509 will have to become a vector of values and the following code modified correspondingly.

So for your questions: since the current code uses the power computation from the first value for the entire vector, the function will work correctly only when data_b's first entry is fine and the same as the rest of the vector entries.
About the sign, I think it might just be related to the overflow (since you're reaching the limits of the bit-size). My hypothesis: if you raise the value to 512 or so, it might flip signs, but if you leave it at 511 or something it might stay negative. Either way, I would not read too much into the sign; the computation at those values will be incorrect anyway.
Hi @snwagh. The funcDivision functionality computes c = a/b. After increasing the bit-width by changing to uint64_t, I tried to run some tests in the debugDivision code as follows:

```cpp
void debugDivision() {
    vector<myType> data_a = {floatToMyType(255), floatToMyType(256), floatToMyType(255)},
                   data_b = {floatToMyType(255), floatToMyType(256), floatToMyType(255)};
    size_t size = data_a.size();
    RSSVectorMyType a(size), b(size), quotient(size);
    vector<myType> reconst(size);

    funcGetShares(a, data_a);
    funcGetShares(b, data_b);
    funcDivision(a, b, quotient, size);

#if (!LOG_DEBUG)
    funcReconstruct(a, reconst, size, "a", true);
    funcReconstruct(b, reconst, size, "b", true);
    funcReconstruct(quotient, reconst, size, "Quot", true);
    print_myType(reconst[0], "Quotient[0]", "FLOAT");
    print_myType(reconst[1], "Quotient[1]", "FLOAT");
    print_myType(reconst[2], "Quotient[2]", "FLOAT");
#endif
}
```
It returns as follows:
a: 2088960 2097152 2088960
b: 2088960 2097152 2088960
Quot: 8137 8131 8137
Quotient[0]: 0.993286
Quotient[1]: 0.992554
Quotient[2]: 0.993286
It seems good.
But if I change the first elements of data_a and data_b as follows:

```cpp
vector<myType> data_a = {floatToMyType(300), floatToMyType(256), floatToMyType(255)},
               data_b = {floatToMyType(300), floatToMyType(256), floatToMyType(255)};
```
I get this erroneous result:

a: 2457600 2097152 2088960
b: 2457600 2097152 2088960
Quot: -8188 -8176 -8175
Quotient[0]: -0.999512
Quotient[1]: -0.998047
Quotient[2]: -0.997925
I am so confused about the result. Is anything wrong with my understanding?
@WeiViming I'd like to contact you with some questions. Would it be convenient to leave your contact information? My email: muou55555@163.com
Thank you for your reply.