ApexAI / apex_rostest

Framework for ROS2 Integration Testing

Give special treatment to test processes #21

Open pbaughman opened 5 years ago

pbaughman commented 5 years ago

Taken from here: https://github.com/ApexAI/apex_rostest/issues/7

@hidmic said It'd be nice for exit code assertions to be implicit in certain cases. Having to assert that e.g. a GTest action exits with a non-zero code is a bit redundant.

Description

If launch adds test actions, or we add test actions like "run pytest" or "run gtest" as described in https://github.com/ros2/launch/pull/178, apex_launchtest could automatically check that these processes exit with code 0. These pytest or gtest launch actions would run in another process, could generate JUnit XML, and would affect the final exit code.
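For context, here is a minimal sketch of the explicit check the proposal would make implicit, assuming apex_launchtest's `generate_test_description`/`ready_fn` convention and its `assertExitCodes` helper; the gtest binary name and output path are hypothetical:

```python
import unittest

import apex_launchtest
import apex_launchtest.asserts
import launch
import launch.actions


def generate_test_description(ready_fn):
    # Hypothetical gtest binary run as a plain process today. Under this
    # proposal, a dedicated "run gtest" test action would replace
    # ExecuteProcess and make the exit-code check below implicit.
    return launch.LaunchDescription([
        launch.actions.ExecuteProcess(
            cmd=['./my_gtest_binary', '--gtest_output=xml:gtest_results.xml'],
        ),
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])


@apex_launchtest.post_shutdown_test()
class TestExitCodes(unittest.TestCase):

    def test_exit_code(self, proc_info):
        # This is the boilerplate the proposal would eliminate: asserting
        # that every process under test exited with code 0.
        apex_launchtest.asserts.assertExitCodes(proc_info)
```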

Desired special treatment

Some open questions about how this works:

If one of these test actions fails and apex_launchtest exits with a non-zero exit code, how does it indicate what exactly failed? Right now, the tests from the test.py file get their results printed to the console and generate JUnit XML. Would we combine the launch action results with the test.py results?

How do these test actions get their XML combined? Do they need to be combined at all? Who checks that they successfully generated XML? Right now, ament_run_test checks that apex_launchtest generated XML because it knows the name of the XML file to expect. Would apex_launchtest need to check that the test actions successfully generated XML? (A sketch of such a check follows this list.)

Language: We should probably come up with a specific name for the tests found in the name_of_test.test.py file to distinguish them from the tests contained in launch actions, so we don't get confused in discussions.
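One possible answer to the "who checks" question above, sketched as an ordinary post-shutdown test; the file name matches the hypothetical gtest invocation in the earlier sketch and is not an apex_launchtest convention:

```python
import os
import unittest

import apex_launchtest


@apex_launchtest.post_shutdown_test()
class TestXmlWasGenerated(unittest.TestCase):

    def test_results_file_exists(self):
        # 'gtest_results.xml' is the hypothetical output path given to the
        # gtest binary in the launch description sketched above.
        self.assertTrue(os.path.isfile('gtest_results.xml'))
```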

hidmic commented 5 years ago

Would we combine the launch action results with the test.py results?

Even though rostest used to just dump all the result files in a directory, I think it'd be best to combine them (see the <testsuites> tag in the JUnit XML format). The tool could then react accordingly to tests that fail to generate output.
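A minimal sketch of that combination step, using only the Python standard library; the helper name and file handling are illustrative, not an apex_launchtest API:

```python
import xml.etree.ElementTree as ET


def combine_junit_files(input_paths, output_path):
    """Merge several JUnit result files under one <testsuites> root."""
    combined = ET.Element('testsuites')
    for path in input_paths:
        root = ET.parse(path).getroot()
        # A result file's root may be a single <testsuite> or a
        # <testsuites> wrapper; normalize both cases.
        suites = [root] if root.tag == 'testsuite' else list(root)
        combined.extend(suites)
    # Roll the per-suite counters up to the root so a consumer can react
    # (e.g. to a missing or empty suite) without walking every child.
    for attr in ('tests', 'failures', 'errors'):
        combined.set(attr, str(sum(int(s.get(attr, '0')) for s in combined)))
    ET.ElementTree(combined).write(output_path, encoding='utf-8', xml_declaration=True)
```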

Intuitively, I think of this as having a fixture and a collection of tests running against it. Where those tests run or how they are implemented should not make a difference.