bcipriano opened 5 years ago
It looks like the CII requirements don't specify a strict coverage target. Can we clarify what "most code paths" means for our purposes?
Yeah, good question -- like a lot of the CII "requirements" it's left intentionally vague, with the understanding that all projects are different and that the CII badge operates mostly on an honor system when project owners fill out the form.
I'll ask at the next ASWF CI working group meeting in a few days, to see what others have landed on.
Initial results are...

- Cuebot = ~50% coverage
- Python = ~24% coverage
The SonarCloud project has the results listed. For example, to view a list of the biggest Python culprits:
https://sonarcloud.io/component_measures?id=bcipriano_OpenCue&metric=uncovered_lines&view=list
Thanks @bcipriano! It's great to see the report! I'm not surprised by the total lack of cuegui coverage, but this definitely shows the immediate need for improvement in rqd.
For sure. Here's the breakdown by component, pretty interesting:
Most of those aren't bad! Agreed with the priorities you listed. Cuesubmit is low, but it's a relatively small codebase, so I'm not too worried about that one.
CueGUI also has roughly 10x the lines of code of the next-largest component, so it's dragging the overall numbers way down right now.
Looks like 14k of those lines are just in icons_rcc.py though, so that number should rise pretty quickly.
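If the Python coverage numbers come from coverage.py, one way to keep a generated file like that out of the totals is an omit rule -- just a sketch, assuming a `.coveragerc` (or equivalent config) at the repo root:

```ini
# Sketch only: exclude compiled Qt resource files from coverage measurement.
[run]
omit =
    */icons_rcc.py
```

SonarCloud also has its own `sonar.coverage.exclusions` property if the filtering is easier to do on that side.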
Would it be possible to refresh that report, please? I am guessing there is a plan to run it periodically, but I couldn't find whether it's implemented yet. I am thinking about taking a stab at some tests, if I may. cuesubmit's Validators.py seemed like an obvious piece of low-hanging fruit (and simple ;))
That would be amazing! It's a great way to learn about the code you're testing :)
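To get started, a test for one of those validators can be very small -- a minimal sketch, assuming Validators.py exposes simple predicate functions (the function name below is hypothetical, substitute the real ones from the module):

```python
# Minimal sketch of a cuesubmit Validators test; matchLettersAndNumbersOnly is a
# hypothetical name -- check cuesubmit/Validators.py for the real validators.
import unittest

from cuesubmit import Validators


class ValidatorsTest(unittest.TestCase):

    def testJobNameValidator(self):
        self.assertTrue(Validators.matchLettersAndNumbersOnly('myJob01'))
        self.assertFalse(Validators.matchLettersAndNumbersOnly('my job!'))


if __name__ == '__main__':
    unittest.main()
```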
For fully up-to-date results -- every commit to master gets analyzed and sent to SonarCloud:

https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_OpenCue_Cuebot
https://sonarcloud.io/dashboard?id=AcademySoftwareFoundation_OpenCue
Current numbers are roughly Cuebot @ 51%, Python code @ 44%.
The "code" tab in there is super helpful in identifying areas of the code that aren't covered by tests.
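If you want a similar per-file breakdown locally before pushing, here's a rough sketch using coverage.py's Python API -- the component path and the use of pytest are assumptions, adjust them to whatever you're testing:

```python
# Rough local coverage check with coverage.py; paths and test runner are
# assumptions -- point them at the component you're working on.
import coverage
import pytest

cov = coverage.Coverage(source=["cuesubmit"])
cov.start()
pytest.main(["cuesubmit/tests"])        # run that component's test suite
cov.stop()
cov.save()
cov.report(show_missing=True)           # per-file list of uncovered lines
cov.xml_report(outfile="coverage.xml")  # XML report SonarCloud can ingest
```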
Test coverage for the various OpenCue components is all over the place, and the overall average is likely poor. For developer confidence, as well as the CII badge requirements, we need to improve this.
The first step is to measure our current coverage across the board. Then we can identify the least-covered pieces and start bringing the average up.