Festo-se / cyclonedx-editor-validator

Tool for creating, modifying and validating CycloneDX SBOMs.
https://festo-se.github.io/cyclonedx-editor-validator/
GNU General Public License v3.0

Add integration tests #148

Closed · mmarseu closed this 1 month ago

mmarseu commented 5 months ago

We're already doing unit testing, and there are some higher-level tests which test entire command modules (such as CommandIntegrationTestCase in https://github.com/Festo-se/cyclonedx-editor-validator/blob/035fab8753a9376a6b37f2e709dfe679aea34639/tests/test_amend.py#L32-L34).

I'd like to add some proper integration tests - or call them acceptance tests if you prefer - which test even higher-level functionality. They would invoke the main() function itself and verify that the program interprets inputs correctly and produces the correct output.

The need for this came to me when I started thinking about #7. The redesign of the tool is so extensive that very little of our code and existing tests will remain untouched. But if we have to change the tests as well, we can't be sure that the tests themselves don't introduce regressions.
So we need tests that don't have to change every time the tool's internals get rewritten.

I'm thinking we should have a set of test SBOMs (we can likely reuse much of what we already have for lower-level testing) and run them through a known-good version of the tool to produce the expected output, which we also commit to the repo. Then our test functions are comparatively simple: set the command-line arguments, run the main() method, and compare the actual output against the expected output. In practice it's a little more involved, but that's the essence.
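A rough sketch of what one of these tests could look like, for illustration only: it assumes main() lives in cdxev.__main__, reads its arguments from sys.argv and writes the resulting SBOM to stdout, and it uses hypothetical fixture file names; the real interface and paths may well differ.

```python
import json
import sys
from pathlib import Path

from cdxev.__main__ import main  # assumed location of the entry point

FIXTURES = Path(__file__).parent / "acceptance"  # hypothetical fixture folder


def test_amend_acceptance(monkeypatch, capsys):
    """Run the amend command end-to-end and compare against a stored result."""
    input_sbom = FIXTURES / "amend.input.json"        # known input SBOM
    expected_sbom = FIXTURES / "amend.expected.json"  # produced by a known-good version

    # Simulate the command line, e.g. "cdx-ev amend <input>".
    monkeypatch.setattr(sys, "argv", ["cdx-ev", "amend", str(input_sbom)])

    # main() may return an exit code or raise SystemExit; treat 0/None as success.
    try:
        exit_code = main()
    except SystemExit as exc:
        exit_code = exc.code
    assert not exit_code

    # Compare the actual output (assumed to go to stdout) with the expected file.
    actual = json.loads(capsys.readouterr().out)
    expected = json.loads(expected_sbom.read_text(encoding="utf-8"))
    assert actual == expected
```

The expected files would only ever be regenerated deliberately with a known-good version, so any unintended change in output fails the test.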

Importantly, these tests don't need to cover every single command-line option. We don't want to reproduce, on another level of abstraction, the entire test coverage that we already have in unit tests.
For example, for amend I believe a single test with an SBOM that triggers all operations will do just fine.
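Along the same lines, regenerating the expected outputs from the known-good version could be a small one-off helper with exactly one input/expected pair per command; again, everything here (console-script name, CLI syntax, file names) is an assumption for illustration, not the real interface:

```python
# Hypothetical helper to (re)generate the expected acceptance outputs.
# Assumes the known-good version is installed as "cdx-ev" on PATH and that
# each command prints the resulting SBOM to stdout; adjust to the real CLI.
import subprocess
from pathlib import Path

FIXTURES = Path(__file__).parent / "acceptance"

# One representative case per command; the amend input would be an SBOM
# crafted to trigger all amend operations at once.
CASES = [
    (["amend"], "amend.input.json", "amend.expected.json"),
]

for args, input_name, expected_name in CASES:
    result = subprocess.run(
        ["cdx-ev", *args, str(FIXTURES / input_name)],
        capture_output=True,
        text=True,
        check=True,
    )
    (FIXTURES / expected_name).write_text(result.stdout, encoding="utf-8")
    print(f"regenerated {expected_name}")
```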

italvi commented 5 months ago

@CBeck-96 related to #74?

mmarseu commented 5 months ago

> @CBeck-96 related to #74?

That is actually a very good fit, yes. These tests would be a great place to also verify validity of generated SBOMs, where applicable.

mmarseu commented 5 months ago

@italvi Upon closer thought, I don't think it adds any value after all. The tests run the commands on a known input and compare the result to a known output. Since the expected result is predetermined and the test can only succeed if the output matches it exactly, additional validation would be pointless: the result would be the same as running the validation directly on the expected output.