libcheck / check

A unit testing framework for C
GNU Lesser General Public License v2.1

Add JUnit XML support #337

Open trevershick opened 2 years ago

trevershick commented 2 years ago

The implementation was inspired by Glenn Washburn's original patches.

References:
- Original patches: https://sourceforge.net/p/check/mailman/message/2963561/
- Issue #334

trevershick commented 2 years ago

My CircleCI job for a pet project uses a version of check patched with these changes. You can see the result files for any build under the Tests tab at https://app.circleci.com/pipelines/github/trevershick/wpp (a recent build: https://app.circleci.com/pipelines/github/trevershick/wpp/48/workflows/8d0d5d1a-8fc2-4eac-95aa-bae3f84435a5/jobs/48/tests). I can't guarantee that build will still be there when you look, though.
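For reference, CircleCI populates that Tests tab from JUnit XML via its `store_test_results` step. A config fragment along these lines is what consumes the output; the output directory name here is illustrative and depends on where the job writes the XML files:

```yaml
# Sketch of the relevant CircleCI config (paths are assumptions).
jobs:
  build:
    steps:
      - checkout
      - run:
          name: Run tests
          command: ctest --output-on-failure
      - store_test_results:
          path: test-results   # directory containing the JUnit XML files
```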

[Screenshot (2021-09-24): CircleCI Tests tab showing the parsed JUnit results]
trevershick commented 2 years ago

@mikkoi Any tips for getting a maintainer to run the workflows? I'd like to contribute more.

brarcher commented 2 years ago

Hm. I did not realize that the tests would not run automatically for first-time collaborators. That must be a recent change: https://github.blog/changelog/2021-04-22-github-actions-maintainers-must-approve-first-time-contributor-workflow-runs/

On another PR I saw this week there was a "run workflow" button I could click to start the workflows. However, on this PR there is no such button.

Does the branch need to be up-to-date with the master branch to have the option to run the workflow? The docs do not mention this as a requirement:

https://docs.github.com/en/actions/managing-workflow-runs/approving-workflow-runs-from-public-forks

trevershick commented 2 years ago

Thanks for looking at this, @brarcher. I've brought the PR up to date with master; perhaps that will allow the workflows to run.

brarcher commented 2 years ago

Ah, that was it, thanks. The test workflows did run, though a number of them hit test failures. The tests pass on the master branch, so I don't believe the failures are pre-existing.

trevershick commented 2 years ago

Got a Linux box up and running, and got a clean test run:

$ CTEST_OUTPUT_ON_FAILURE=1 ninja -C build test
ninja: Entering directory `build'
[0/1] Running tests...
Test project /home/trevershick/workspace/check/build
    Start 1: check_check
1/9 Test #1: check_check ......................   Passed  178.07 sec
    Start 2: check_check_export
2/9 Test #2: check_check_export ...............   Passed  176.53 sec
    Start 3: test_output.sh
3/9 Test #3: test_output.sh ...................   Passed    0.11 sec
    Start 4: test_log_output.sh
4/9 Test #4: test_log_output.sh ...............   Passed    0.03 sec
    Start 5: test_xml_output.sh
5/9 Test #5: test_xml_output.sh ...............   Passed    0.09 sec
    Start 6: test_tap_output.sh
6/9 Test #6: test_tap_output.sh ...............   Passed    0.03 sec
    Start 7: test_check_nofork.sh
7/9 Test #7: test_check_nofork.sh .............   Passed    0.01 sec
    Start 8: test_check_nofork_teardown.sh
8/9 Test #8: test_check_nofork_teardown.sh ....   Passed    0.01 sec
    Start 9: test_set_max_msg_size.sh
9/9 Test #9: test_set_max_msg_size.sh .........   Passed    0.02 sec

100% tests passed, 0 tests failed out of 9

Total Test time (real) = 354.89 sec
trevershick commented 2 years ago

I just noticed this is still open. I believe I've addressed most of the review comments; for a couple of them, I attempted to explain my reasoning instead.