Closed: timokau closed this issue 4 years ago.
Can you try with 0.5.1dev?
Roughly the same picture with 0.5.1dev:
=================================== FAILURES ===================================
______________________________ [doctest] util.pyx ______________________________
144
145 :param float_type: one of 'double', 'long double', 'dpe', 'dd', 'qd' or 'mpfr'
146 :returns: precision in bits
147
148 This function returns the precision per type::
149
150 >>> import fpylll
151 >>> from fpylll import FPLLL
152 >>> FPLLL.get_precision('double')
153 53
Expected:
64
Got:
113
/build/source/src/fpylll/util.pyx:153: DocTestFailure
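For context on this first failure: on aarch64 the C long double is typically IEEE binary128 (113 bits of significand precision), not the x86 80-bit extended format (64 bits), which is why a hard-coded expectation of 64 fails there. A minimal, hedged way to inspect the platform's long double precision without fpylll, assuming NumPy is available:

```python
import numpy as np

# Significand precision of the platform's C long double, in bits.
# nmant counts explicit mantissa bits; +1 adds the implicit leading bit.
# Typical values: 64 on x86-64 (80-bit extended), 113 on aarch64
# (IEEE binary128), 106 on ppc64 (double-double), 53 where
# long double is just double.
bits = np.finfo(np.longdouble).nmant + 1
print(bits)
```

So the doctest's assumption only holds on x86; on this aarch64 box the same probe would report 113, matching the "Got: 113" above.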
______________________________ [doctest] gso.pyx _______________________________
554
555 @property
556 def float_type(self):
557 """
558 >>> from fpylll import IntegerMatrix, GSO, FPLLL
559 >>> A = IntegerMatrix(10, 10)
560 >>> M = GSO.Mat(A)
561 >>> M.float_type
562 'double'
563 >>> FPLLL.set_precision(100)
Expected:
53
Got:
100
/build/source/src/fpylll/fplll/gso.pyx:563: DocTestFailure
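The second failure looks like doctest state leaking between test modules: judging by the expected "53" in the failing example, FPLLL.set_precision returns the previous precision, so if an earlier doctest already set it to 100, a later doctest sees 100 where 53 was expected. A toy sketch of that mechanism (the global and function here are illustrative stand-ins, not fpylll's actual internals):

```python
_precision = 53  # toy stand-in for FPLLL's process-global MPFR precision

def set_precision(bits):
    """Set the global precision and return the previous value,
    as FPLLL.set_precision appears to do in the failing doctest."""
    global _precision
    old, _precision = _precision, bits
    return old

# The first doctest to call this sees the default and passes ...
first = set_precision(100)
print(first)    # 53
# ... but the same call in a later doctest inherits the leaked state:
second = set_precision(100)
print(second)   # 100, where that doctest still expects 53
```

If this is what happened, the fix is to reset the precision at the end of each doctest (or run each module's doctests in a fresh process) rather than to change the expected value.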
_____________________________ [doctest] pruner.pyx _____________________________
363 >>> _ = M.update_gso()
364
365 >>> pr = Pruning.Pruner(0.9*M.get_r(0,0), 2**40, [M.r()], 0.51, metric=Pruning.PROBABILITY_OF_SHORTEST)
366 >>> c = pr.optimize_coefficients([1. for _ in range(M.d)])
367 >>> pr.measure_metric(c) # doctest: +ELLIPSIS
368 0.00271195...
369
370 >>> pr = Pruning.Pruner(0.9*M.get_r(0,0), 2**2, [M.r()], 1.0, metric=Pruning.EXPECTED_SOLUTIONS)
371 >>> c = pr.optimize_coefficients([1. for _ in range(M.d)])
372 >>> pr.measure_metric(c) # doctest: +ELLIPSIS
Expected:
0.99051765...
Got:
0.9905176601013412
/build/source/src/fpylll/fplll/pruner.pyx:372: DocTestFailure
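The third failure is a last-digit rounding difference: the expected "0.99051765..." disagrees with the aarch64 result in the eighth decimal place. With a trailing "..." under +ELLIPSIS, doctest effectively performs a prefix match, so shortening the expected prefix tolerates the per-platform rounding. A quick sketch of the comparison (values copied from the failure above):

```python
# The value printed on aarch64, as reported in the failure.
got = repr(0.9905176601013412)

# The old expected prefix disagrees in the eighth decimal (...65 vs ...66):
old_expected = "0.99051765"
# A shorter prefix, usable as "0.990517..." with # doctest: +ELLIPSIS:
new_expected = "0.990517"

print(got.startswith(old_expected))  # False: this is why the doctest fails
print(got.startswith(new_expected))  # True: the shortened prefix matches
```

This matches the "easy to fix" assessment below: trimming the expected output to fewer digits makes the doctest platform-independent.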
The last one is easy to fix; the first one seems to be a wrong assumption on our part (that long double always has 64 bits of precision). No idea about the second one. I pushed fixes for (1) and (3); not sure what to do about (2) without access to an aarch64 box.
With those two patches the tests succeed on aarch64 (I ran the test suite 10 times). Maybe the second failure was a fluke of the test runner caused by the first failure.
Thanks a lot for looking into this!
On a large aarch64 machine (64 CPUs, 126 GB of RAM), I get reproducible precision-related test failures: