Closed ndesmoul closed 2 months ago
What OS, CPU and compiler do you use?
What header do you include?
If you comment out G1::setMulVecOpti, does it run well? If so, the optimized mulVec may be wrong.
There is no computeDigest. Please show me the complete code.
CPU: Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz
OS: Arch Linux
I include the following header:
#include <mcl/bls12_381.hpp>
using namespace mcl::bls12;
When I want to test with bigger curves (e.g. BLS12_461), I do this:
#include <mcl/bn512.hpp>
using namespace mcl::bn;
At least it worked before.
Here is the code of computeDigest:
Fr computeDigest(const G1& W, const vector<BYTE>& mess) {
    SHA_256 hashFunc;
    //PPassPublicParameters::hashUpdatePoint(hashFunc, W);
    size_t sizeBuf = (BN::param.p.getBitSize() + 7) / 8;
    vector<uint8_t> buf(sizeBuf);
    G1 ptAff;
    size_t le;
    // hash (W_x||W_y)
    G1::normalize(ptAff, W);
    le = ptAff.x.serialize(&buf[0], sizeBuf);
    hashFunc.update(&buf[0], le);
    le = ptAff.y.serialize(&buf[0], sizeBuf);
    hashFunc.update(&buf[0], le);
    hashFunc.update(&mess[0], mess.size());
    vector<BYTE> ct_str(32);
    unsigned int hash_le = hashFunc.digest(&ct_str[0]);
    Fr r;
    r.setBigEndianMod(&ct_str[0], hash_le);
    return r;
}
It's just H(W_x || W_y, message), with the result converted into an Fr.
But I use an independent SHA-256 implementation, not the internal hash function provided by MCL.
Note that, in that case, you may use a fixed value for the return value of computeDigest when testing. The real problem is that, for some reason, I get W != Wt with the BLS12-381 curve.
G1 Wt2 = g * s - bigY * r;
Wt2 gets the same value as Wt (so different from W), so G1::mulVec may not be the problem here.
I tried the same code on another computer, this time with an Intel Core i7 processor running Ubuntu, and everything seems to run fine.
So the problem may be related to the processor architecture (Intel Xeon). An older version of MCL (tag 1.80) still runs fine on it.
Thank you for the information.
If you comment out G1::setMulVecOpti, does it run well? If so, the optimized mulVec may be wrong.
Can you try this in a non-working environment to see whether it runs correctly?
I have commented out line 2308 and recompiled the library:
// G1::setMulVecOpti(mcl::msm::mulVecAVX512);
But it doesn't change anything.
It's the G1 multiplication that seems faulty. I used fixed values for the exponent and the point.
With the following configuration:
initPairing(mcl::BLS12_381);
Fp::setETHserialization(true);
Fr::setETHserialization(true);
mcl::bn::setMapToMode(MCL_MAP_TO_MODE_HASH_TO_CURVE);
With tag 1.80:
g = 861cf01e6ca788f63ba0c0c1831fb52510d4b4d7292dc2756e1ad6c4b7e940b62d78edd6389c37832bd3d385f4653577
x = 0fff123456789abcdeffff123456789abcdef123456789abcdef123456789eee
Y = g^x = b734da095c7bd5b22613d8df7672ad70e591390e1242418a1e4b177310b3d97535209bf8b4af883639a8f4a211e452c9
With the latest git version:
g = 861cf01e6ca788f63ba0c0c1831fb52510d4b4d7292dc2756e1ad6c4b7e940b62d78edd6389c37832bd3d385f4653577
x = 0fff123456789abcdeffff123456789abcdef123456789abcdef123456789eee
Y = g^x = b5e66c1f2df2d2f065794412ec4f685e513245fac840689d85891c8d4372c203575824543733a53fb0eda759d1168d22
The Y value is incorrect with the latest version.
Thanks for the valuable report. The code path for CPUs older than Broadwell seems to need correction.
// Broadwell emulation by Intel SDE
mcl% sde -bdw -- ./a.out
P=861cf01e6ca788f63ba0c0c1831fb52510d4b4d7292dc2756e1ad6c4b7e940b62d78edd6389c37832bd3d385f4653577
x=fff123456789abcdeffff123456789abcdef123456789abcdef123456789eee
Q=b734da095c7bd5b22613d8df7672ad70e591390e1242418a1e4b177310b3d97535209bf8b4af883639a8f4a211e452c9 // good
// Haswell emulation
mcl% sde -hsw -- ./a.out
P=861cf01e6ca788f63ba0c0c1831fb52510d4b4d7292dc2756e1ad6c4b7e940b62d78edd6389c37832bd3d385f4653577
x=fff123456789abcdeffff123456789abcdef123456789abcdef123456789eee
Q=b5e66c1f2df2d2f065794412ec4f685e513245fac840689d85891c8d4372c203575824543733a53fb0eda759d1168d22 // bad
I'll check it.
Could you try the fix_split_on_old_cpu branch?
OK, I tried the fix_split_on_old_cpu branch.
Indeed, the BLS12-381 curve works fine with it (as do the other curves I have tested).
Hi,
With the latest version of the library (pulled 22/04/2024), I have some issues when using the BLS12-381 curve: all my tests suddenly fail. There is no crash; some of the computations just seem invalid.
For example, the code below, a simple EC-SDSA (EC-Schnorr) implementation, worked before with every curve I have tested (BN_256, NIST_P256, BLS12-381, BLS12_461, ...). But now the generated signature is invalid only with BLS12-381 (it still works with the other curves).