Open emanlove opened 1 year ago
A repeatable, data-driven win would be running the full test suite under coverage... in addition to getting (and ideally setting) a baseline for coverage of SeleniumLibrary itself, an augmented run could then (or perhaps, only) include `selenium`.
While `pytest-cov` is handy, given the extra complexity of the `(u|a)test.py` scripts, it likely makes the most sense to use `coverage run -m utest.run` (adding an `__init__.py` to both directories so they quack like modules). The rest of the details can be configured in `pyproject.toml`, and special cases like the `selenium` excursion can be handled by specifying an alternate config file via environment variable.
`coverage combine` followed by `coverage html` can definitely merge multiple `.coverage` files, even across platforms and architectures... though usually not across multiple versions of the system-under-test. This would provide both a top-level percentage, by version-under-test, as well as line-by-line information about which `selenium.*` functions are not exercised by SeleniumLibrary's test suite... which would become more accurate as coverage was observed and improved.
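Merging data recorded on different machines usually also needs a `[tool.coverage.paths]` section, so differing checkout locations map to one canonical tree; a hypothetical sketch:

```toml
# pyproject.toml -- hypothetical path remapping so `coverage combine`
# can merge data files recorded under different checkout locations
[tool.coverage.paths]
source = [
    "src/SeleniumLibrary",              # assumption: canonical location
    "*/site-packages/SeleniumLibrary",  # as installed in CI
]
```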
I can work up a strawman PR, if there's interest.
I made a coverage implementation on the Browser library side and could get it working with RF as well. But I couldn't get it working if pabot was used to run the RF tests.
@bollwyvl If it is not too much effort, I would be interested in that implementation.
> pabot was used to run the RF tests
I think (but have not tried, and will, on my downstream) that the CLI invocation of `pabot` (and probably the Python one, haven't looked) would allow using something like:

```
pabot --command coverage run --parallel-mode -m robot --end-command
```
Probably would be painful if using a distributed `pabot` setup.
The worst-case scenario would be doing it The Hard Way, with e.g. some environment variables that start coverage via the Python API... this has other benefits, like the aforementioned possibility of per-robot-test contexts, but that sounds like something that would warrant a dedicated `CoverageLibrary`.
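A minimal sketch of that environment-variable approach, assuming the `coverage` package is installed; the variable name `ROBOT_COVERAGE_RC` and the function are hypothetical, but `Coverage(config_file=..., data_suffix=...)` and `.start()` are coverage.py's real API:

```python
# Hypothetical sketch: start coverage from an environment variable,
# e.g. from a sitecustomize.py or a library __init__.
import os

def maybe_start_coverage(env_var="ROBOT_COVERAGE_RC"):
    """Start coverage via its Python API if env_var points at a config file.

    Returns the running Coverage object, or None if not enabled.
    """
    rcfile = os.environ.get(env_var)
    if not rcfile:
        return None  # variable unset: measurement stays off
    import coverage  # third-party: pip install coverage
    # data_suffix=True appends hostname/pid so parallel runs don't clobber
    # each other's .coverage files and can be merged with `coverage combine`.
    cov = coverage.Coverage(config_file=rcfile, data_suffix=True)
    cov.start()
    return cov
```

With the variable unset, the hook is a no-op, so it is safe to leave in place for ordinary runs.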
> not too much effort
I am hoping to try to mentor one of my `&{DAY_JOBS_COLLEAGUES}` in actually doing the PR... we run hundreds of RFSL tests a day, and it would help folk not to just think of it as :magic_wand: (or, when things don't work, :imp:) that comes from the free-software sky daddy, by getting their hands a little dirty.
I can happily report that the `pabot --command coverage run ... -m pabot --end-command` technique works just fine, and already handles much of the complexity of naming things by adding a timestamp and machine identifier. In my case, I just emit the `.coverage*` files into a top-level folder (which must exist before starting!), again breaking the model for distributed `pabot`.
My downstream wrapper, with a relatively modest test suite, lights up a mere 45% of `selenium`, 41% of SeleniumLibrary, and (shamefully) only 62% of its own code :blush:.
I'll do a bit more digging to see if a listener API implementation can provide the suite/test context information, for starters.
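An untested sketch of what such a listener might look like, assuming coverage.py >= 5.0 (for `Coverage.current()` and dynamic contexts via `switch_context`) and Robot Framework's listener API v3; the class name is hypothetical:

```python
# Hypothetical listener sketch: label coverage data with the current
# Robot Framework test name as a coverage.py "dynamic context".
class CoverageContextListener:
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self):
        try:
            import coverage
            # The Coverage instance already started by `coverage run`, if any.
            self._cov = coverage.Coverage.current()
        except ImportError:
            self._cov = None  # coverage not installed; listener is a no-op

    def start_test(self, data, result):
        if self._cov is not None:
            # Lines measured from here on are attributed to this test.
            self._cov.switch_context(data.longname)

    def end_test(self, data, result):
        if self._cov is not None:
            self._cov.switch_context("")  # back to the default context
```

In principle, running robot under `coverage run` with this listener enabled would let `coverage html --show-contexts` report which test executed each line, though I have not verified that end-to-end.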
Moving beyond coverage of Python source: might we extract information about the current test back to the keyword/test level to provide line-level coverage of `.robot` and `.resource` files? This would actually be a huge boon for large test suites, which inevitably end up with big chunks of dead test/task code, and is at least theoretically possible, a la coverage-jinja-plugin.
As an example, we should see that we don't use the `pin_script` method on `selenium.webdriver.remote.webdriver`.
As part of the v6.2.0 milestone and beyond, which is starting to add Selenium 4 capabilities, I was curious how many Selenium Python methods are used within SeleniumLibrary, and what functionality we might be missing. I was thinking that a rough comparison of available methods against their usage within the SeleniumLibrary code might show this difference of available versus implemented.
Thinking this would consist of two tasks
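If useful, here is a rough, hypothetical sketch of the available-versus-used comparison (pure stdlib; in practice the class would be something like `selenium.webdriver.remote.webdriver.WebDriver`, the source text would come from reading SeleniumLibrary's files, and the substring search is deliberately naive):

```python
# Hypothetical sketch: list a class's public methods, then check which
# names appear anywhere in a body of source text.
import inspect

def public_methods(cls):
    """Return the sorted public callable attribute names of cls."""
    return sorted(
        name
        for name, member in inspect.getmembers(cls)
        if callable(member) and not name.startswith("_")
    )

def used_in_source(names, source_text):
    """Naive substring check: maps each name to whether it appears at all."""
    return {name: (name in source_text) for name in names}
```

This would only be a first approximation (it can't see dynamic dispatch, and substring matches can collide), but it would be enough to surface obvious gaps like `pin_script` above.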