When giving a noisy hint with a very small error, or a modular hint with a very large modulus, we get the absurd result that these hints are rated as better than a perfect hint.
This is not an implementation bug, but a framework bug.
In the real world, the lattice gets so distorted in one direction that only one hyperplane intersects the BDD ellipsoid, effectively reducing the dimension by one. But in the current model, the volume keeps increasing, as if pushing the other hyperplanes even further away from the ellipsoid would still help.
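The approx-hint side of this can be sketched with plain NumPy (a toy model, not the framework's code): conditioning the ellipsoid covariance Σ on ⟨v, s⟩ known up to noise σ² uses the standard Gaussian conditioning Σ' = Σ − (Σv)(Σv)ᵀ / (vᵀΣv + σ²). The resulting log-volume goes to −∞ as σ² → 0, while a perfect hint on the same v can only ever remove one dimension's worth of volume.

```python
import numpy as np

n = 5
Sigma = np.eye(n)                 # toy BDD covariance (unit ellipsoid)
v = np.zeros(n)
v[0] = 1.0                        # hint direction v = e_1

def logvol(S):
    # log-volume of the ellipsoid, up to a dimension-dependent constant
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * logdet

def condition(S, v, sig2):
    # approx hint: Gaussian conditioning on <v, s> with noise variance sig2
    Sv = S @ v
    return S - np.outer(Sv, Sv) / (v @ Sv + sig2)

# perfect hint: intersect with the hyperplane <v, s> = c; since v = e_1 and
# Sigma is diagonal, this is just dropping the first coordinate
S_perfect = Sigma[1:, 1:]

for sig2 in (1e-3, 1e-9, 1e-15):
    # log-volume of the conditioned ellipsoid diverges to -inf as sig2 -> 0,
    # yet the dimension never drops
    print(sig2, logvol(condition(Sigma, v, sig2)))
print("perfect:", logvol(S_perfect))
```

The dimension of the conditioned ellipsoid stays n while its volume shrinks without bound, which is exactly the regime where the framework's "worthy hint" comparison breaks down.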
Some examples:
load("../framework/instance_gen.sage")
n = 80
q = 997
D_s = build_centered_binomial_law(4)
A, b, dbdd = initialize_from_LWE_instance(DBDD_predict_diag, n, q, n, D_s, D_s, verbosity=2)
v = vec([1]+(2*n-1)*[0])
dbdd.estimate_attack()
dbdd.integrate_approx_hint(v, dbdd.leak(v), 1e-15)
dbdd.integrate_perfect_hint(v, dbdd.leak(v))
dbdd.estimate_attack()
print("**************")
D_s = build_centered_binomial_law(4)
A, b, dbdd = initialize_from_LWE_instance(DBDD, n, q, n, D_s, D_s, verbosity=2)
v = vec([1]+(2*n-1)*[0])
dbdd.estimate_attack()
dbdd.integrate_modular_hint(v, dbdd.leak(v), 1000000000000000)
dbdd.integrate_perfect_hint(v, dbdd.leak(v))
dbdd.estimate_attack()
output:
Build DBDD from LWE
n= 80 m= 80 q=997
Attack Estimation
ln(dvol)=496.9282872 ln(Bvol)=552.3800616 ln(Svol)=110.9035489 δ(β)=1.013368
dim=161 δ=1.013389 β=26.42
integrate approx hint (conditionning) u0 = -1 + χ(σ²=0.000) Worthy hint ! dim=161, δ=1.01404701, β=20.98
integrate perfect hint u0 = -1 Unworthy hint, Forced it.
Attack Estimation
ln(dvol)=497.2748607 ln(Bvol)=552.3800616 ln(Svol)=110.2104017 δ(β)=1.013544
dim=160 δ=1.013539 β=24.20
**************
Build DBDD from LWE
n= 80 m= 80 q=997
Attack Estimation
ln(dvol)=496.9282870 ln(Bvol)=552.3800615 ln(Svol)=110.9035489 δ(β)=1.013368
dim=161 δ=1.013389 β=26.42
integrate modular hint (smooth) u0 = 0 MOD 1000000000000000 Worthy hint ! dim=161, δ=1.01485772, β=14.96
integrate perfect hint u0 = 0 Unworthy hint, Forced it.
Attack Estimation
ln(dvol)=497.2749068 ln(Bvol)=552.3801077 ln(Svol)=110.2104017 δ(β)=1.013544
dim=160 δ=1.013539 β=24.20
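A sanity check on the modular case (plain Python arithmetic, not the framework's code): with v = e_1 and a centered binomial secret of parameter 4, ⟨v, s⟩ lies in [-4, 4], so any modulus k > 8 determines ⟨v, s⟩ exactly and carries no more information than a perfect hint. Yet the smooth modular-hint integration multiplies the lattice volume by k, so the credited gain ln(k) grows without bound:

```python
import math

# With v = e_1 and a centered binomial secret of parameter 4, <v, s> lies
# in [-4, 4]: every modulus k > 8 pins down <v, s> exactly, so all such
# moduli are informationally equivalent to a perfect hint. The smooth
# modular-hint integration nevertheless multiplies the lattice volume by k,
# so the credited ln-volume gain keeps growing with k:
for k in (9, 10**6, 10**15):
    print(f"k = {k:>16}: claimed ln-volume gain = {math.log(k):.2f}")
```

For k = 10^15 this credits roughly ln(10^15) ≈ 34.5 nats, which is why β drops all the way to 14.96 in the run above, well below the 24.20 obtained from the perfect hint.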