Closed Dioprz closed 1 month ago
Implemented in 77a30fc. Tests are working as expected now.
What should the comment say, exactly? The comment and better test naming are all that remain for this draft, I think.
The comment could be:
The CryptographicEstimators finds (slightly) better parameters, which lead to slightly better timings in the case of Both-May and May-Ozerov. But given the same parameters, the CryptographicEstimators and the SyndromeDecodingEstimators compute the same expected runtime.
EDIT: FIXED with @Javierverbel
~Once the error described in https://github.com/Crypto-TII/CryptographicEstimators/pull/183 was fixed, two problems have arisen:~
~1. The actual outputs produced by the `Dumer` and `Stern` internal estimators are much further from their expected outputs than the `0.01` tolerance used by the rest of the estimators in `ext_estimators_1`, causing the tests to fail. The other internal estimators work well with this epsilon/tolerance.~
~2. I had to disable `ext_estimates_with_prange()` because the framework only supports single numeric values as expected outputs, and this one produces a dictionary.~
~Temporary and proposed solutions:~
~1. For this situation, I have:~
    ~1. Moved those estimators into a new `ext_sd` function, `ext_estimators_3`.~
    ~2. Compared those expected outputs using a higher epsilon of `0.5` (calculated by using an even higher epsilon of 10 and looking at the prints produced with the `pytest -s tests/test_kat.py` command. I sent the modified `test_kat` file with prints so you can reproduce this).~
~3. I would appreciate your input on how to handle this situation, and on whether any other clarification/solution can be made.~
`kat_test` function in `test_kat.py`, and we gracefully handle the situation in that function. So, for example:
At `tests/external_estimators/ext_sd.py`, from the `ext_estimates_with_prange` function we serialize the dictionary:
```python
expected_outputs = [
    {
        "Prange": {
            "additional_information": {
                "gauss": 10.929258408636972,
                "permutations": 2.014646775964401,
            },
            "estimate": {
                "memory": 12.688250309133178,
                "parameters": {"r": 4},
                "time": 19.587761374376097,
            },
        }
    }
]
```
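As a side check (my own sketch, not part of the PR), the expected output above is plain JSON-serializable data, so an `"equality"` comparison against a stored-and-reloaded copy is well-defined:

```python
import json

# The Prange expected output from the snippet above, as plain data.
expected_outputs = [
    {
        "Prange": {
            "additional_information": {
                "gauss": 10.929258408636972,
                "permutations": 2.014646775964401,
            },
            "estimate": {
                "memory": 12.688250309133178,
                "parameters": {"r": 4},
                "time": 19.587761374376097,
            },
        }
    }
]

# Round-trip through JSON: the nested dict survives unchanged, so comparing
# the deserialized value with == behaves exactly like comparing the original.
serialized = json.dumps(expected_outputs)
assert json.loads(serialized) == expected_outputs
```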
At `tests/internal_estimators/sd.py` we define:
```python
import sys

# Assumed import path; adjust to the package layout.
from cryptographic_estimators.SDEstimator import SDEstimator


def estimates_with_prange(input: tuple, epsilon: int = 0, compare_by: str = "equality"):
    n, k, w, excluded_algorithms_names = input
    # Resolve algorithm-name strings to the classes defined in this module.
    excluded_algorithms = list(
        map(lambda x: getattr(sys.modules[__name__], x), excluded_algorithms_names)
    )
    actual_complexity = SDEstimator(
        n, k, w, excluded_algorithms=excluded_algorithms
    ).estimate()
    return actual_complexity, epsilon, compare_by
```
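The `getattr(sys.modules[__name__], x)` lookup above is exactly where a bad mapping can silently pick the wrong class. A self-contained sketch of just that name-to-class resolution, using hypothetical stand-in classes (the real algorithm classes come from the SDEstimator package and are not imported here):

```python
import sys

# Hypothetical stand-ins for the algorithm classes that would normally
# live in this module's namespace after the SDEstimator imports.
class Prange: ...
class Stern: ...

def resolve_algorithms(names):
    """Map algorithm-name strings to the classes of the same name
    defined in the current module."""
    return [getattr(sys.modules[__name__], name) for name in names]

# Each name resolves to the class object itself, not an instance,
# which is what SDEstimator's excluded_algorithms parameter expects.
resolved = resolve_algorithms(["Prange", "Stern"])
assert resolved == [Prange, Stern]
```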
And at `test_kat.py`:
```python
from typing import Any


def kat_test(expected_output: Any, actual_output: Any, epsilon: float, compare_by: str):
    if compare_by == "less_than_epsilon":
        assert abs(expected_output - actual_output) <= epsilon
    elif compare_by == "equality":
        assert expected_output == actual_output
    elif compare_by == "OR":
        assert any(
            abs(expected_output - possible_output) <= epsilon
            for possible_output in actual_output
        )
    else:
        raise ValueError("Invalid comparison method")
```
So now we can compare dictionaries, strings, and whatever else (and with the last elif, handle the situation at #181).
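For illustration, the three `compare_by` branches can be exercised with a few self-contained checks (repeating the `kat_test` body so this sketch runs on its own; the inputs are made-up values, not real KAT data):

```python
def kat_test(expected_output, actual_output, epsilon, compare_by):
    if compare_by == "less_than_epsilon":
        assert abs(expected_output - actual_output) <= epsilon
    elif compare_by == "equality":
        assert expected_output == actual_output
    elif compare_by == "OR":
        assert any(abs(expected_output - p) <= epsilon for p in actual_output)
    else:
        raise ValueError("Invalid comparison method")

# Numeric tolerance: passes because the difference is within epsilon.
kat_test(19.5877, 19.5878, epsilon=0.01, compare_by="less_than_epsilon")

# Exact equality: works for dictionaries such as the Prange expected output.
kat_test({"parameters": {"r": 4}}, {"parameters": {"r": 4}}, epsilon=0, compare_by="equality")

# "OR": passes if any candidate output is within epsilon of the expected one.
kat_test(10.0, [42.0, 10.005], epsilon=0.01, compare_by="OR")
```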
Notice that advancing in this direction also makes it possible, now or in the future, to migrate the long doctests into the framework, further improving CI times and overall code quality.
~Let me know what you think. @Javierverbel @FloydZ @Memphisd~
Thanks for the suggestion @FloydZ. It has been applied in 6e648e2.
The last hidden comment was fixed with the help of @Javierverbel. The culprit behind the errors was just a bad mapping, and the other concerns were resolved.
This is ready for review. Review instructions are at the now updated top comment (PR description).
Issues: 24 New issues, 0 Accepted issues
Measures: 0 Security Hotspots, 69.9% Coverage on New Code, 0.1% Duplication on New Code