ibpsa / project1-boptest

Building Optimization Performance Tests

Compilation of test cases in unit-tests #183

Open javiarrobas opened 4 years ago

javiarrobas commented 4 years ago

@dhblum it rings a bell that we've discussed this already in the past, but I think we need to give it deeper thought. It's about the compilation of the test cases that takes place at the beginning of most of the unit tests. In some of them it makes sense, like test_parser or test_data, since their functionality is involved in the compilation process. For the others I'm not sure it's needed, or even good practice. Specifically for test_kpis, I doubt we need to compile the models for unit testing, since those tests read data from a deployed test case.
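For reference, the pattern in question looks roughly like the sketch below. This is a minimal illustration only, assuming the JModelica `pymodelica` toolchain; the class name, file path, and test body are hypothetical placeholders, not the actual test code:

```python
import unittest

from pymodelica import compile_fmu  # JModelica compiler interface


class TestKpis(unittest.TestCase):
    """Illustration of the setUp-time compilation this issue questions."""

    @classmethod
    def setUpClass(cls):
        # Recompile the test case model from source before the test run.
        # Note: 'wrapped' / 'wrapped.mo' are placeholders for the test
        # case's wrapper model.
        cls.fmu_path = compile_fmu('wrapped', 'wrapped.mo')

    def test_kpis(self):
        # The tests then run against the freshly compiled FMU, so a stale
        # wrapped.fmu checked into the repo is never actually exercised.
        pass
```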

An example of why it might not be good practice arises in commit https://github.com/ibpsa/project1-boptest/commit/a9b9754595dde840914db2aa39f1ffbefa8d5cb2, where we forgot to update wrapped.fmu and wrapped.mo, which should have included the latest KPI tags with CO2 measurements. Still, all tests passed, because the original .mo models did have the tags and were recompiled in every unit test, producing the right wrapped.fmu for the test run, but one that did not persist afterwards. This has been solved in https://github.com/ibpsa/project1-boptest/commit/5727f6b4a20c0b0d589502a37798397e95991b6c.

I understand that unit tests need to be independent from each other. Precisely for that reason, I think we should avoid compiling test cases in each of the unit tests and instead accept that some unit tests are built around a compiled wrapped.fmu that should include the latest features. What do you think?

dhblum commented 4 years ago

Thanks for raising the point. It is a good one. I agree that we should add tests of the test case fmus as made available in the repo, and that test_kpis may not need to recompile test cases if it only reads in data.

But one thing that I think should still be tested somehow is that the Modelica models for the test cases included in the repo match what is made available in the associated fmus.

javiarrobas commented 4 years ago

I agree that that is a necessary test. In fact, such a test would have caught the issue I raised in my first comment: https://github.com/ibpsa/project1-boptest/issues/183#issue-599447394. However, we don't get that check by just compiling the models and using the newly compiled model in the tests, because that process never uses the fmu as made available in the repo. A test I can think of is, for each provided test case (see the sketch after this list):

  1. Compile the Modelica model in the repo into fmu_test (this is what we currently do at the beginning of each test).
  2. Load fmu_test.
  3. Load the fmu available in the repo as fmu_repo.
  4. Assert that (some of) the simulation result trajectories of fmu_test and fmu_repo coincide.

All the other tests then wouldn't have to compile the models, but could just use the available fmus with the guarantee that the fmus represent what is in the Modelica models.
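As a rough illustration, such a test could look like the sketch below. This is a minimal sketch assuming the `pymodelica`/`pyfmi` toolchain; the model name ('wrapped'), the compared output variable ('TRooAir.T'), the simulation horizon, and the tolerance are hypothetical placeholders:

```python
import numpy as np
import numpy.testing as npt
from pymodelica import compile_fmu
from pyfmi import load_fmu

# 1. Compile the Modelica model in the repo into fmu_test.
#    Compile into a separate directory so we don't overwrite the
#    wrapped.fmu that is checked into the repo.
fmu_test_path = compile_fmu('wrapped', 'wrapped.mo', compile_to='build')

# 2./3. Load the freshly compiled fmu and the fmu shipped in the repo.
fmu_test = load_fmu(fmu_test_path)
fmu_repo = load_fmu('wrapped.fmu')

# 4. Simulate both over the same horizon.
res_test = fmu_test.simulate(final_time=3600)
res_repo = fmu_repo.simulate(final_time=3600)

# Compare a representative output trajectory on a common time grid,
# since the two solvers may not return identical time points.
t = np.linspace(0, 3600, 100)
y_test = np.interp(t, res_test['time'], res_test['TRooAir.T'])
y_repo = np.interp(t, res_repo['time'], res_repo['TRooAir.T'])
npt.assert_allclose(y_test, y_repo, rtol=1e-4)
```

Comparing interpolated trajectories rather than raw result arrays avoids false failures from solvers choosing different output time points.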

dhblum commented 4 years ago

Yep, I agree with the process you describe.

javiarrobas commented 4 years ago

Ok! I'll create a pull request for it. Maybe after https://github.com/ibpsa/project1-boptest/pull/169 is closed, just so we don't mix things up?

dhblum commented 4 years ago

Yes, I think that would be good. Also note: https://github.com/ibpsa/project1-boptest/issues/177.