Closed: Clemcarle closed this 2 years ago
So this is quite mysterious, because it runs successfully on my and Oliver's computers (Windows), but fails on Richel's and on GitHub (Linux). Does anyone have any idea what is going on?
This is odd indeed. I suspect this has to do with the RNG drawing different keys on different machines. Also the RNG itself might have changed since I wrote this test, so that the expected A is no longer drawn for either KAM. If so, I'll need to make a better test. I'll have a look sometime!
@Clemcarle and @TheoPannetier: I have taken a look at the code. It is not odd that different key maps share a key; only the keys *within* a single keymap are checked for uniqueness :-)
I am not sure about this, @richelbilderbeek: making sure that the generated key maps do not share the same keys is the whole purpose of the get_n_random_kams function. Do you believe it is not doing that?
@Clemcarle you are right and I was wrong: I see on Linux that the key is in neither :-)
Ok, so my test is bad then: I assumed the same key would be drawn across all machines. I'll come up with a better test :-)
It is a fun failed test :-)
Merging #431 (7023a1c) into develop (a0a59d8) will increase coverage by 0.19%. The diff coverage is 100.00%.
@@            Coverage Diff             @@
##           develop     #431      +/-   ##
===========================================
+ Coverage    92.86%   93.06%    +0.19%
===========================================
  Files           44       44
  Lines         2144     2205       +61
  Branches       120      130       +10
===========================================
+ Hits          1991     2052       +61
  Misses         153      153
Impacted Files | Coverage Δ | |
---|---|---|
key_action_map.h | 100.00% <ø> (ø) | |
game_options.cpp | 100.00% <100.00%> (ø) | |
key_action_map.cpp | 99.33% <100.00%> (+0.09%) | :arrow_up: |
main.cpp | 95.58% <0.00%> (+3.92%) | :arrow_up: |
Continue to review full report at Codecov.
Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a0a59d8...7023a1c. Read the comment docs.
@richelbilderbeek what do you think of my new test for #353 ? It's ugly because we run the test over 1000 semi-random iterations, but I don't see any other way to ensure that the test will work for any OS
Hi @TheoPannetier, I like the test! Doing it once, however, is good enough here :-).
A hint about that is the number of iterations, 1000, which probably already felt arbitrary :-)
Issue #353 has been solved!