Open abitrolly opened 2 months ago
Hi @abitrolly !
Sorry for not getting to this sooner. You are touching multiple coupled (and relevant) points here so I'll try to split this into multiple (hopefully doable) tasks:
Sadly, we can't avoid the extra click to get from the pull request to the Check Run UI; that is how GitHub designed it. We can, however, use the Check Run's markdown view to provide more information. But since this is a GitHub-specific feature and we try to share as much as possible across forges, we may put more info there, yet we don't want to duplicate all the info from the dashboard in that markdown.
@mfocko mentioned we might be able to provide GitHub with a standardised form of test results. (We need to check this.) Also, we can try to get this from TF and use it either in a status name (a little space) or in GitHub Check Run Markdown (more space, but GitHub specific as mentioned above) and/or in the Packit's dashboard.
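On the "standardised form of test results" point: the GitHub Checks API already accepts a structured `output` object (title, short summary, and a larger markdown `text` field) when creating a check run. A minimal sketch of such a payload, with all counts and table contents hypothetical, might look like:

```python
import json

# Hypothetical test counts -- in practice these would come from TF results.
passed, failed, errored = 1, 0, 0

# Sketch of the "output" object accepted by the GitHub Checks API
# (POST /repos/{owner}/{repo}/check-runs). The "title" fits in the small
# status space; "text" is the GitHub-specific markdown view with more room.
check_run_output = {
    "title": f"{passed} passed, {failed} failed, {errored} error",
    "summary": "Testing Farm results for this pull request.",
    "text": "| Test | Result |\n|------|--------|\n| /plans/smoke | pass |",
}

print(json.dumps(check_run_output, indent=2))
```

This is only an illustration of where the extra information could go, not Packit's actual payload.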
Definitely can be improved -- I can see a few small changes that might help here:
Can definitely be improved, but this is more on the TF developers' side. This specific occurrence of "failure" is very confusing -- for me as well...;) We can ask on their issue tracker to improve it.
One thing @Venefilyn is trying to achieve is to have one shared dashboard for all the related tools (basically to have one dashboard both for Packit and TF). This is still not clear how to approach (and sustainably manage) but something we are thinking about.
I hope I haven't forgotten about any crucial issue -- please, let me know what you think about these and we can create a separate task for each item. Thanks for providing us with the whole story of you going through this! This is really helpful since we are a bit biased.
Thanks for the reply. The "first experience" UI/UX issues are still relevant. Now that I've got some answers, I am also a bit biased, so let me concentrate on the problem I am trying to solve right now.
> - TF result view
>
> Can definitely be improved, but is more on the TF developers. This specific occurrence of "failure" is very confusing. For me as well...;) We can ask on their issue tracker to improve this.
I asked here https://gitlab.com/testing-farm/oculus/-/issues/24, but the Web UI only parses `results.xml` (https://gitlab.com/testing-farm/oculus/-/merge_requests/65/diffs), which is provided by TMT (I guess), and the `results.xml` from TMT just doesn't provide sufficient details.
In https://github.com/teemtee/tmt/pull/3039#issuecomment-2221169944 we've traced the incomplete `results.xml` to the beakerlib test result processor. There are still some missing pieces connecting `results.xml` to the JUnit format. The Web UI lists both, but it is not clear if it uses both.
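To illustrate what "sufficient details" would mean in practice, here is a minimal sketch that extracts failure messages from a JUnit-style results document (the test names and XML shape are hypothetical; the real TMT/beakerlib output may differ):

```python
import xml.etree.ElementTree as ET

# A minimal JUnit-style document; the real TMT output may look different.
RESULTS_XML = """\
<testsuite tests="2" failures="1" errors="0">
  <testcase name="/plans/smoke/test-one"/>
  <testcase name="/plans/smoke/test-two">
    <failure message="expected 0, got 1">assert rc == 0</failure>
  </testcase>
</testsuite>
"""

def list_failures(xml_text):
    """Return (test name, failure message) pairs from a JUnit-style document."""
    root = ET.fromstring(xml_text)
    return [
        (case.get("name"), case.find("failure").get("message"))
        for case in root.iter("testcase")
        if case.find("failure") is not None
    ]

for name, message in list_failures(RESULTS_XML):
    print(f"{name}: {message}")
```

If `results.xml` carried `message` details like this, a Web UI could surface the actual error instead of sending the user off to read raw logs.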
> One thing @Venefilyn is trying to achieve is to have one shared dashboard for all the related tools (basically to have one dashboard both for Packit and TF). This is still not clear how to approach (and sustainably manage) but something we are thinking about.
I like the static web app approach that the TF Web UI (https://gitlab.com/testing-farm/oculus/-/merge_requests/64/diffs) is using. If the `results.xml` format is documented and well engineered, people could use it to also render results on GitLab etc. Maybe it is impossible to have a perfect dashboard for everything, but it is definitely possible to make a dashboard that can be customized for a specific workflow.
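As a toy example of such workflow-specific rendering: part of the "confusing failure" above is that the overall state and the per-test counts come from different places. A custom dashboard built on a documented results format could keep them consistent (all logic here is illustrative, not TF's actual rules):

```python
def summarize(passed, failed, errored, infra_error=False):
    """Render an unambiguous one-line status from result counts.

    Avoids showing a bare "fail" next to "(1 passed, 0 failed, 0 error)":
    infrastructure problems are reported as their own state instead of
    being folded into a generic test failure.
    """
    counts = f"{passed} passed, {failed} failed, {errored} error"
    if infra_error:
        return f"infra error ({counts})"
    state = "fail" if failed or errored else "pass"
    return f"{state} ({counts})"

print(summarize(1, 0, 0))                    # pass (1 passed, 0 failed, 0 error)
print(summarize(1, 0, 0, infra_error=True))  # infra error (1 passed, 0 failed, 0 error)
```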
In conclusion I must absolutely add that

```mermaid
graph LR
    this --> needs --> diagrams
```
Thanks for all the info, @abitrolly ! And thanks for your work on the related tools -- this shows quite well how the current state is a bit misleading to a user trying to get to the responsible service...;)
Just a small update: we are starting a small research effort around the shared dashboard (how people want to use it and what information they need). I am linking this issue since it has a couple of interesting points. If you are interested, we can even include you in our interview round. (But issues work fine as well, so no pressure...;)
Description
When a build fails, there are no errors shown, only logs. GitHub and GitLab are able to parse error logs and provide info with links that point directly to what happened. With Packit, even reaching the logs is about 4 clicks away, and then lots of scrolling.
For example, this test failure in `tmt`. :sob: The overall result shows `fail` (WHAT???) with `(1 passed, 0 failed, 0 error)` (WHAT???).
Now the only way to find out the error is to read through all the logs in those tiny scroll areas.
Importance
Very important.