MovingBlocks / TerasologyLauncher

Terasology Launcher is the official launcher for the open source game Terasology.
http://terasology.org/
Apache License 2.0

Present more detailed information from CI results #572

Open keturn opened 4 years ago

keturn commented 4 years ago

The current checks that run on pull requests (also known as validation actions, or continuous integration) produce output that looks like a teletype that's trying to save paper by not including stack traces for failing tests.

The output from these tools can be a lot easier to read and much more informative, helping authors, peers, and mentors discover why something is not passing a test, without needing to get to their own development machine and check out the branch to try to reproduce the results.

I'm not suggesting we reinvent the wheel or use anything cutting-edge; there's been plenty done in this field. We're using JUnit, which set the standards that a lot of other tools came to follow, so I hope that makes it easy to find something compatible.

This GitHub Actions forum thread on Publishing Test Results concludes there aren't good options for presenting this information in the limited interface available to the GitHub Action itself, but there are third-party integrations that are free for open source repositories.

A couple of options turned up when searching for something GitHub-Actions-compatible and might be worth a closer look:

(This would be a more complete way to address #557)
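As a rough illustration, wiring in one of those third-party integrations could look something like the workflow below. This is a sketch only: dorny/test-reporter is just one example from the marketplace, and the paths and task names are guesses at this project's Gradle layout, not something I've verified.

```yaml
# Sketch only: publish JUnit XML from the Gradle build as a check run on the PR.
# dorny/test-reporter is one example of a marketplace action for this;
# the paths and tasks below are guesses at this project's layout.
name: Tests

on: [pull_request]

permissions:
  contents: read
  checks: write   # the reporter needs to create a check run

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: adopt
          java-version: 11
      - name: Run tests
        run: ./gradlew test
      - name: Publish test results
        uses: dorny/test-reporter@v1
        if: success() || failure()   # report even when the tests fail
        with:
          name: JUnit results
          path: build/test-results/test/*.xml
          reporter: java-junit
```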

keturn commented 4 years ago

It sounds like Cervator would like the opportunity to address the maintainability concerns that have come up with Jenkins, and to offer Jenkins as a way of providing this sort of interface to build results. There's probably an issue for that in another repo or a card on a Trello board somewhere?

keturn commented 4 years ago

Here's a more concrete example of what's missing from the current view, as compared to Jenkins:

[Screenshot: test runner output side by side, the GitHub Actions check log vs. the Jenkins test result view]

(@jdrueckert, I hope this gives you the specifics you were asking for yesterday in Discord)

skaldarnar commented 4 years ago

I'd like to mention the GitHub Checks API in this context: https://developer.github.com/v3/checks/

There are GitHub Apps that integrate with it to annotate the code with what went wrong (see https://github.com/marketplace/check-run-reporter) - maybe an even better option than a nested view in Jenkins?
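(For comparison, even without a separate app, a workflow step can emit annotations itself via workflow commands. A minimal sketch, with a made-up file path and message:)

```yaml
# Minimal sketch: a step can create a file/line annotation directly
# with a workflow command; the path and message here are made up.
- name: Annotate a known failure location
  if: failure()
  run: echo "::error file=src/main/java/SomeTest.java,line=42::AssertionError: expected X but was Y"
```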

keturn commented 4 years ago

Annotations do sound like a great feature, but the thread I linked earlier says they have some significant limitations that mean they can only provide part of the answer. In particular,

> Annotations can only be displayed on changed files. If you have a file that was not changed but the test is failing for that file, you cannot display an annotation there.

skaldarnar commented 4 years ago

Looking at JabRef, I see that they set up different jobs for different tests in their pipeline - maybe that's something we could do as an intermediate step to get a bit more visibility on build and test results: https://github.com/JabRef/jabref/blob/master/.github/workflows/tests.yml

[Screenshot: JabRef's pull request checks, with separate jobs for different test categories]
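Applied here, an intermediate step along those lines might look roughly like this (just a sketch; the Gradle task names are guesses at what our current check runs, not verified):

```yaml
# Sketch: split the single check into separate jobs so the checks UI
# shows which part failed. Task names are assumptions, not verified.
name: Pull request checks

on: [pull_request]

jobs:
  checkstyle:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: adopt
          java-version: 11
      - run: ./gradlew checkstyleMain checkstyleTest

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: adopt
          java-version: 11
      - run: ./gradlew test
```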

keturn commented 4 years ago

That'd be an improvement, yeah. If it presented separate jobs for each of the things check is doing now (checkstyle, test, and whatever else), it would make it easier to see which part failed and to find the relevant output.

On the other hand, with the checks running as fast as they do, the overhead for each one being a job that has to set up its own runtime might be pretty big in comparison. :thinking:

I don't think that's a real blocker; anything we use that has this sort of dynamic worker allocation is going to be the same way. It'll still be plenty fast, assuming there's no shortage of workers in the pool. It's just a little resource-hungry. :truck: