javiarrobas opened 4 years ago
Thanks for raising the point. It is a good one. I agree that testing the test case fmus as made available in the repo should be added, and that `test_kpis` may not need to recompile test cases if it only reads in data.
But one thing I think should still be tested somehow is that the Modelica models for the test cases included in the repo match what is made available in the associated fmus.
I agree that this is a necessary test. Actually, such a test would have triggered an error with the issue that I was raising in my previous comment: https://github.com/ibpsa/project1-boptest/issues/183#issue-599447394. However, I don't think we get that check by just compiling the models and using the newly compiled model in the tests, because that process never uses the fmu as made available in the repo. A test I can think of is, for each provided test case, to compile the Modelica model and check that the resulting fmu matches the fmu made available in the repo.
All the other tests then wouldn't have to compile the models but could just use the available fmus, with the guarantee that the fmus represent what is in the Modelica models.
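For concreteness, here is a minimal sketch of what such a test could look like, assuming the pymodelica/pyfmi tooling used elsewhere in the repo. The paths, the model class name, the simulation horizon, the tolerance, and the output variable `TRooAir_y` are hypothetical placeholders, not the actual repo layout:

```python
import unittest

import numpy as np
from pymodelica import compile_fmu
from pyfmi import load_fmu


class TestProvidedFMU(unittest.TestCase):
    """Check that the fmu committed in the repo matches a fresh
    compilation of the corresponding Modelica model.

    Paths, the model class name, and the output variable below are
    hypothetical placeholders for illustration only.
    """

    def test_fmu_matches_model(self):
        # Compile the Modelica model from source into a fresh fmu.
        fresh_path = compile_fmu('TestCase1.Wrapped', 'models/wrapped.mo')
        fresh = load_fmu(fresh_path)
        # Load the fmu as made available in the repo.
        provided = load_fmu('testcase1/models/wrapped.fmu')
        # Simulate both on the same fixed output grid so the result
        # trajectories can be compared point by point.
        opts_fresh = fresh.simulate_options()
        opts_fresh['ncp'] = 500
        opts_provided = provided.simulate_options()
        opts_provided['ncp'] = 500
        res_fresh = fresh.simulate(final_time=3600, options=opts_fresh)
        res_provided = provided.simulate(final_time=3600, options=opts_provided)
        # A representative measurement should agree within tolerance.
        np.testing.assert_allclose(res_fresh['TRooAir_y'],
                                   res_provided['TRooAir_y'],
                                   rtol=1e-3)


if __name__ == '__main__':
    unittest.main()
```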
Yeup, I agree with the process you indicate.
Ok! I'll create a pull request for it. Maybe after https://github.com/ibpsa/project1-boptest/pull/169 is closed, just so we don't mix things up.
Yes, that would be good I think. Also note: https://github.com/ibpsa/project1-boptest/issues/177.
@dhblum it rings a bell that we've discussed this already in the past, but I think we might need to give it deeper thought. It's about the compilation of the test cases that takes place at the beginning of most of the unit tests. In some of them it makes sense, like `test_parser` or `test_data`, as their functionality is involved in the compilation process. For the others I'm not sure if it's needed, or even if it's good practice. Specifically for `test_kpis`, I doubt that compiling the models is needed for unit testing, as the tests read data from a deployed test case.

An example of why it might not be good practice arises in commit https://github.com/ibpsa/project1-boptest/commit/a9b9754595dde840914db2aa39f1ffbefa8d5cb2, where we forgot to update `wrapped.fmu` and `wrapped.mo`, which should have included the latest KPI tags with CO2 measurements. Still, all tests passed because the original `.mo` models did have the tags and were compiled in every unit test, resulting in the right `wrapped.fmu` being used for the test run, even though it did not persist afterwards. This has been solved in https://github.com/ibpsa/project1-boptest/commit/5727f6b4a20c0b0d589502a37798397e95991b6c.

I understand that unit tests need to be independent from each other. Precisely for that reason I think we should avoid compiling the test cases in each unit test and instead accept that some unit tests are built around a compiled `wrapped.fmu` that should include the latest features. What do you think?
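To make the proposal concrete, here is a minimal sketch of that pattern, assuming a unittest-based suite like the existing ones. The fmu path and the CO2 variable name `CO2RooAir_y` are hypothetical placeholders; the point is only that the test loads the committed `wrapped.fmu` directly, so a stale fmu would now fail the test instead of being masked by an in-test recompilation:

```python
import unittest

from pyfmi import load_fmu

# Path to the fmu as committed in the repo (hypothetical path).
WRAPPED_FMU = 'testcase1/models/wrapped.fmu'


class TestKPIs(unittest.TestCase):
    """Sketch of a kpi test that relies on the committed wrapped.fmu
    instead of recompiling the Modelica model in setUp. The fmu is
    assumed to already carry the latest KPI tags (e.g. CO2
    measurements), which is exactly what a forgotten update would
    now expose as a test failure.
    """

    @classmethod
    def setUpClass(cls):
        # Load the fmu once for the whole test class; no compilation step.
        cls.fmu = load_fmu(WRAPPED_FMU)

    def test_kpi_tags_present(self):
        # The CO2 measurement should be exposed by the committed fmu.
        # 'CO2RooAir_y' is a hypothetical variable name for illustration.
        variables = self.fmu.get_model_variables()
        self.assertIn('CO2RooAir_y', variables)


if __name__ == '__main__':
    unittest.main()
```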