Open ozgurakgun opened 4 days ago
In PR #421, tests are incorrectly passing even though there are mismatches between Conjure solutions and Conjure-Oxide solutions for some models. This happens because, when tests are run with `ACCEPT=true` (i.e., `ACCEPT=true cargo test`), the expected solution files are automatically rewritten with the generated solutions, even if they don't match the Conjure solutions. As a result, every subsequent test run passes, since the test framework compares the newly generated solutions with the updated expected files (which are identical).

To resolve this, the key is to avoid overwriting the expected files when `ACCEPT=true` is set, unless the generated solutions match the Conjure solutions.
When `ACCEPT=true`: … (`save_model_json` and `save_minion_solutions_json`).

When `ACCEPT=false`: …

The `integration_test_inner` function will be refactored to:

- Read the `ACCEPT` flag.
- If `ACCEPT=false`, assert the generated files against the expected solutions.
- If `ACCEPT=true`, assert that the generated solutions match the Conjure solutions. If they match, rewrite the expected files.

@ozgurakgun Looks good?
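A minimal sketch of the proposed decision logic, in Rust. The helper names (`accept_enabled`, `should_rewrite_expected`) and the plain string comparison standing in for real solution-file comparison are assumptions for illustration, not the actual Conjure-Oxide test harness API:

```rust
use std::env;

// Assumed helper: read the ACCEPT flag from the environment,
// as set by `ACCEPT=true cargo test`.
fn accept_enabled() -> bool {
    env::var("ACCEPT").map(|v| v == "true").unwrap_or(false)
}

/// Decide what the test harness should do with the generated solutions.
/// Returns Ok(true) if the expected file should be (re)written,
/// Ok(false) if nothing needs writing, Err on a test failure.
fn should_rewrite_expected(
    accept: bool,
    generated: &str,
    conjure: &str,
    expected: &str,
) -> Result<bool, String> {
    if !accept {
        // Normal run: generated output must match the stored expected file.
        if generated == expected {
            Ok(false)
        } else {
            Err("generated solutions do not match expected file".into())
        }
    } else {
        // ACCEPT=true: only rewrite expected files when the generated
        // solutions agree with the Conjure solutions.
        if generated == conjure {
            Ok(true)
        } else {
            Err("generated solutions do not match Conjure solutions; refusing to accept".into())
        }
    }
}

fn main() {
    // ACCEPT=true with a Conjure mismatch must fail instead of
    // silently rewriting the expected file (the bug described above).
    assert!(should_rewrite_expected(true, "gen", "conjure", "old").is_err());
    // ACCEPT=true with matching Conjure solutions accepts the rewrite.
    assert_eq!(should_rewrite_expected(true, "same", "same", "old"), Ok(true));
    println!("accept mode: {}", accept_enabled());
}
```

The point of the `Err` branch under `accept` is exactly the fix discussed here: accepting is refused whenever the generated solutions disagree with Conjure's.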
Does `cargo test ACCEPT=true` work? I thought `ACCEPT` had to go first.
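For what it's worth, in POSIX shells a `VAR=value` assignment only becomes an environment variable for the command when it comes first; written after the command it is just an ordinary argument. A quick illustration using `sh -c` in place of the actual `cargo test` run:

```shell
# Assignment before the command: ACCEPT is in the command's environment.
ACCEPT=true sh -c 'echo "$ACCEPT"'    # prints: true

# After the command it is passed as an argument, not an env var,
# so (assuming ACCEPT is not already exported) this prints an empty line.
sh -c 'echo "$ACCEPT"' ACCEPT=true
```

So `ACCEPT=true cargo test` is the form that the test framework would see.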
… *generated* files to *expected*. Simpler logic, easier to extend in the future.

I stopped reading at "implementation details".
reads a bit like gpt (mostly correct, well structured, but slightly incorrect in a few important ways), which is not a problem per se, but make sure you agree with what it says. I would just remove the "Implementation Details:" and "Key Change:" sections as they are repetitious.
You can start by writing a description @YehorBoiar :)