DLR-SC / tigl

The TiGL Geometry Library to process aircraft geometries in pre-design.
https://dlr-sc.github.io/tigl/
Apache License 2.0

Unit / integration tests for python bindings. #826

Open · rainman110 opened this issue 3 years ago

rainman110 commented 3 years ago

We need tests for the Python bindings. Since at least one of the CI jobs builds the bindings, we can test them easily.

MarAlder commented 5 days ago

From #1012:

Perhaps we should strive to also cover such Python functions via unittests?

I totally agree, but I wouldn't want to add it to this issue/PR-pair.

Any ideas on a pragmatic way to achieve this? Here are some ramblings:

  • Regardless of how we implement the Python tests, having them in CI would be an improvement. Yep

  • When we implement the Python tests, we could agree that we don't actually have to test the functionality of the SWIG-wrapped classes in the Python unit tests, but that it suffices to just check the existence of the classes in the Python interface. This would limit the maintenance overhead of TiGL. It might be possible to achieve this with some degree of automation, though I am not sure. Without automation, every contributor must add these checks themselves and the reviewer must manually verify that all relevant checks are in place, which adds a (small) burden to the maintenance.

Hmm, I was thinking in the direction that new high-level methods (what we define in CCPACS classes) are covered in separate unittest files (grouped thematically, e.g. for ducts, fuelTanks, etc.). This should be somewhat consistent with the C++ unittests. As @rainman110 says, it is then easy to trigger all tests via CI.

Automating it 🤔, always good. Not sure about the effort, though. However, it could ensure better coverage of the tests and prevent the patchwork of overlapping or missing tests that my approach above might produce.

  • We try to derive C++ and Python unit tests from a model-based testing approach. This would involve a lot of work to set the system up, and the outcome is not clear: it could be an awesome new thing that boosts robustness, or it could turn out to be too complex and impractical, because it is really hard to formally model the process of modelling with TiGL/CPACS.

Fancy, but perhaps a little over the top? 😏

rainman110 commented 5 days ago

The problem is that we won't have Python tests for files that we haven't wrapped, because we forgot to integrate these classes in the first place.

Instead of having a whitelist of what to test, we could do it the other way around: from the CPACS codegen or the existing CPACS classes (header files), we can derive which Python classes need to exist. A blacklist could exclude classes that we don't want to test.

Then, a Python test "simply" checks that all required classes exist and tries to instantiate or at least import them, which should be sufficient for an existence test.
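A minimal sketch of what such an existence test could look like, assuming pytest and that the bindings are importable as `tigl3.configuration`; the concrete class list below is only illustrative and would ideally be generated rather than maintained by hand (see the next comment):

```python
# Minimal sketch of an existence test for the SWIG-generated Python bindings.
# Assumptions: the bindings are importable as tigl3.configuration, and
# EXPECTED_CLASSES would eventually be generated from the C++ headers.
# The names below are only examples.
import importlib

import pytest

EXPECTED_CLASSES = [
    ("tigl3.configuration", "CCPACSWing"),
    ("tigl3.configuration", "CCPACSFuselage"),
]

# Classes we deliberately don't expose in Python.
BLACKLIST = set()


@pytest.mark.parametrize(
    "module_name, class_name",
    [c for c in EXPECTED_CLASSES if c not in BLACKLIST],
)
def test_class_is_wrapped(module_name, class_name):
    module = importlib.import_module(module_name)
    # Importing and looking up the attribute is enough for an existence
    # test; instantiating is often impossible without a loaded CPACS file.
    assert hasattr(module, class_name)
```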

joergbrech commented 5 days ago

Instead of having a whitelist of what to test, we could do it the other way around: from the CPACS codegen or the existing CPACS classes (header files), we can derive which Python classes need to exist. A blacklist could exclude classes that we don't want to test.

Yes, that is what I meant by automation. This requires a bit of code analysis, e.g. identifying the inputs and outputs of (member) functions and recognizing std::vector<XXX> and std::unique_ptr<XXX> even when they are hidden behind auto or typedefs. Maybe we could use libclang or something. The existing code generator could help, but only for the base classes from the generated namespace, not for the custom high-level functions in the CCPACSXXX classes.
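Purely as an illustration of the libclang idea: a sketch that enumerates CCPACS* class definitions from the headers via the libclang Python bindings. The header location, compiler flags, and blacklist are assumptions; real headers would additionally need the correct -I include paths:

```python
# Sketch: derive the list of expected Python classes from the C++ headers
# using the libclang Python bindings (pip install libclang). Paths, flags,
# and the blacklist are illustrative assumptions.
from pathlib import Path

import clang.cindex


def classes_in_header(header):
    index = clang.cindex.Index.create()
    # Real headers will need the proper -I include paths here.
    tu = index.parse(str(header), args=["-x", "c++", "-std=c++17"])
    for cursor in tu.cursor.walk_preorder():
        if (cursor.kind == clang.cindex.CursorKind.CLASS_DECL
                and cursor.is_definition()
                and cursor.spelling.startswith("CCPACS")):
            yield cursor.spelling


BLACKLIST = set()  # classes we don't want to require in the bindings

expected = sorted(
    {name
     for header in Path("src").rglob("CCPACS*.h")
     for name in classes_in_header(header)}
    - BLACKLIST
)
# "expected" could then feed the parametrized existence test above.
```

This would only cover class existence; checking member functions would need the same kind of cursor walk over CXX_METHOD nodes.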

Fancy, but perhaps a little over the top? 😏

Yes, let's not start with that. :)