Closed perillo closed 4 months ago
Oops, I forgot to pull #433.
Woah, very nice, thanks a lot!
The parse_cobertura tool was mainly used to try out the python module, so not really used as such. I'm fine with removing it.
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 65.68%. Comparing base (c5a463d) to head (3fe69ad).
There are some problems: the GitHub Linux amd64 runner took a lot of time installing the firefox snap.
I have a few questions about ci.yml:
- Why don't the scripts in .github/workflows have the executable bit set (excluding ci-run-tests.sh)? Currently ci.yml has to do chmod +x.
- The tests are only run on Linux amd64 and FreeBSD. On MacOS, the Prepare step does chmod u+x .github/workflows/osx-build.sh .github/workflows/ci-run-tests.sh but the tests are not run. Is this intentional?
- It seems that the CI uses a release build of kcov. On my system I use a debug build. Maybe this is the reason there was a different behavior in the system_mode test?

IMHO the tests on Linux should be run with normal user privileges.
> Woah, very nice, thanks a lot!
> The parse_cobertura tool was mainly used to try out the python module, so not really used as such. I'm fine with removing it.

Originally I was thinking of adding a new test_cobertura.py test, since I assumed that the __main__ code was a test.
For now I will keep parse_cobertura; it may be removed in a later PR.
> I have a few questions about ci.yml:
> - Why don't the scripts in .github/workflows have the executable bit set (excluding ci-run-tests.sh)? Currently ci.yml has to do chmod +x.
Not sure, but better to make them executable then and clean up ci.yml. It also looks like it's done in multiple places.
> - The tests are only run on Linux amd64 and FreeBSD. On MacOS, the Prepare step does chmod u+x .github/workflows/osx-build.sh .github/workflows/ci-run-tests.sh but the tests are not run. Is this intentional?
They used to run, but there is an issue with code signing on MacOS: when running kcov the first time, you'll get a popup with something like "The application kcov wants to control other processes, allow?". You'll need to accept this to run kcov, so the tests on MacOS are now disabled in the ci run (since I can't click the popup...). I haven't investigated very well if it's possible to accept via the CLI somehow.
> - It seems that the CI uses a release build of kcov. On my system I use a debug build. Maybe this is the reason there was a different behavior in the system_mode test? IMHO the tests on Linux should be run with normal user privileges.
Maybe, but as I said before, the system mode tests are not really relevant since it's barely half-implemented anyway. Better to just disable them I think.
And yes, running as non-root is preferred. I don't remember if there was any particular reason to use sudo, but otherwise it can be turned off.
For code signing I found passing XCODE_ATTRIBUTE_CODE_SIGN_IDENTITY "" to cmake.

Yes, the first one sounds like the way to go.
It needs to be signed though, since it uses non-standard "entitlements", i.e., acting as a debugger. Otherwise, the OS will not allow the ptrace etc calls.
Anyway, that's a separate issue to this!
These are additional features that I planned to implement, but I think it is better to implement them in the test driver (like https://gist.github.com/perillo/cebd9f82094471b241b961a4f869b0ed) instead.
What do you think about reimplementing it in Python and pushing it to master, in tools/run-test or tests/tools/run-test?
Here are the additional features:

--count N argument

This is implemented in go test. It will run the tests N times. When also adding --failfast, it enables a test-until-failure mode. The problem is that when also using --shuffle it is not very useful, because each run will use the same seed. Better to let the test driver do it.
--work flag

This is implemented in go test. When --work is set, it will print the temporary working directory before starting the tests, and will not clean the directory. When not set, it will remove the directory.

One problem is that, in order to work, the temporary working directory MUST be created by the test runner. The solution is to remove the outdir positional argument and create a temporary directory.

The second problem is that I would like to set the TMPDIR environment variable to work_dir, but the easy way of using os.putenv does not work, since it may cause memory leaks (see https://docs.python.org/3/library/os.html#os.putenv).

The third problem is that currently ci.yml uses /tmp, and that is fine.
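A minimal sketch of the TMPDIR point above, assuming the runner creates the working directory itself (the names work_dir and scratch are invented): assigning to os.environ, rather than calling os.putenv directly, keeps the Python-level mapping in sync (though the putenv caveat from the linked docs still applies underneath), and passing dir= to tempfile avoids touching the environment at all.

```python
import os
import shutil
import tempfile

# Sketch only: `work_dir` stands in for the directory the runner would
# create after dropping the `outdir` positional argument.
work_dir = tempfile.mkdtemp(prefix="kcov-tests-")

# Option 1: export TMPDIR to child processes via os.environ, which also
# updates the Python-level mapping (unlike a bare os.putenv call).
os.environ["TMPDIR"] = work_dir
tempfile.tempdir = None  # drop the cached default so TMPDIR is re-read

# Option 2: skip the environment entirely and pass dir= explicitly.
scratch = tempfile.mkdtemp(dir=work_dir)
print(scratch.startswith(work_dir))  # → True

shutil.rmtree(work_dir)  # cleanup for this sketch
```

Option 2 sidesteps the putenv question entirely, at the cost of having to thread the directory through every call.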
IMHO it is better if this is done by the test driver (the test driver is only used when testing locally).
I will implement these features in a separate PR, so this PR can be merged.
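As a rough illustration of what such a tools/run-test driver could look like: the flag semantics follow the go test behavior described above, but the test names and the run_once placeholder are invented for illustration and are not kcov's actual test API.

```python
import argparse
import random
import shutil
import subprocess
import sys
import tempfile

def run_once(tests):
    """Placeholder test run: returns True when every test passes."""
    ok = True
    for name in tests:
        # A real driver would invoke the kcov test suite here; this
        # no-op subprocess just stands in for a per-test command.
        ok &= subprocess.run([sys.executable, "-c", "pass"]).returncode == 0
    return ok

def main(argv=None):
    parser = argparse.ArgumentParser(prog="run-test")
    parser.add_argument("--count", type=int, default=1,
                        help="run the tests N times")
    parser.add_argument("--failfast", action="store_true",
                        help="stop at the first failing run")
    parser.add_argument("--shuffle", action="store_true",
                        help="randomize test order with a fresh seed each run")
    parser.add_argument("--work", action="store_true",
                        help="print the working directory and do not remove it")
    args = parser.parse_args(argv)

    tests = ["test_basic", "test_bash", "test_compiled"]  # invented names
    work_dir = tempfile.mkdtemp(prefix="run-test-")
    if args.work:
        print("WORK=" + work_dir)
    try:
        for _ in range(args.count):
            if args.shuffle:
                # A fresh seed per run, printed so a failing order
                # can be replayed later.
                seed = random.randrange(2**32)
                print("shuffle seed:", seed)
                random.Random(seed).shuffle(tests)
            if not run_once(tests) and args.failfast:
                return 1
        return 0
    finally:
        if not args.work:
            shutil.rmtree(work_dir)
```

Because the driver owns the shuffle seed, --count plus --shuffle gets a different order on each iteration, which is exactly what is hard to do inside the test runner itself.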
Yes, that sounds like neat features! Adding them in the run-test script sounds fine to me.
Anyway, if you feel this PR is now ready, I'll gladly merge it
> Yes, that sounds like neat features! Adding them in the run-test script sounds fine to me.

Do you prefer the tools/ directory (in the project root) or the tests/tools directory?

> Anyway, if you feel this PR is now ready, I'll gladly merge it
For me it is ready to be merged.
Thanks.
I think I'd prefer test/tools
As I am working on a kcov update for Fedora, I wanted to thank @perillo for his work on the test suite, which I never had time to investigate. I'm done reviewing changes between the v42 and v43 tags and I'm almost positive that this pull request is the one that turns the tables on my machine.
It went from almost nothing working to this on my Fedora 40 laptop:
======================================================================
UNEXPECTED SUCCESS: runTest (test_compiled.collect_no_source.runTest)
----------------------------------------------------------------------
Ran 91 tests in 129.627s
FAILED (skipped=8, expected failures=6, unexpected successes=1)
Running the test suite in an RPM build still fails, but much, much less than before, when pretty much nothing used to pass:
Ran 89 tests in 101.262s
FAILED (failures=12, errors=2, skipped=7, expected failures=6, unexpected successes=1)
The remaining failures are probably caused by compilation and linking hardening flags set on Fedora but again, no time to investigate.
That's it, thanks again for your work on the test suite.
I concur! @perillo did really great work on the testsuite!
And thanks a lot for working on kcov in Fedora @dridi ! I use Fedora myself on my Linux-laptop, although I obviously use my own build of it :-)
@dridi and @SimonKagstrom, I'm glad that these changes are useful.
There are still a few changes I would like to implement:
- Replace os.system with subprocess in the doXX functions. All errors when calling system are currently ignored in the doXX functions; this may help find hidden bugs.

Unfortunately these changes may introduce bugs.
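As an illustration of the os.system to subprocess change, here is a minimal sketch; do_run is a hypothetical helper, not the actual doXX code, but it shows how check=True surfaces errors that os.system silently ignores.

```python
import subprocess

# Before (sketch): a non-zero exit status from os.system was ignored.
#   os.system("some-command args")
#
# After (sketch): subprocess.run with check=True raises
# CalledProcessError on failure, so errors can no longer go unnoticed.
def do_run(cmd):
    # `cmd` is a list of arguments, which also avoids shell quoting issues.
    return subprocess.run(cmd, capture_output=True, text=True, check=True)

res = do_run(["echo", "hello"])
print(res.stdout.strip())  # → hello
```

The trade-off the comment above alludes to is real: tests that previously "passed" while a helper command failed may start erroring once those failures are checked.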
Refactor the tests
CHANGES
[x] tests: refactor the test runner
The current implementation, using the unittest.main function, is not working correctly with how kcov tests are loaded. One annoying problem is that filtering tests is not possible.

Rewrite the test runner, adding a new libkcov package.

The code from "testbase.py" is moved to libkcov.__init__ (the test API), and the code from "run-tests" is moved to libkcov.main (the test runner).

In the test API, remove the configure method and the global variables, since the configuration paths are now processed by argparse as positional arguments. These paths are validated and then used to initialize each test case, which now has an __init__ method.

In order to make running the tests convenient, add a __main__ entry point to the libkcov package.

Rename the test names; "file.py" is renamed to "test_file.py", since this is the preferred naming convention.

Tests are loaded by a custom test loader, not derived from unittest.TestCase, since the code is really simple.

Move the "cobertura.py" module to the libkcov package, but without the main code. The main code has been moved to "parse_cobertura", so that it can be used as a tool. The new implementation makes it easy to separate the test API, test runner, tests and tools.

In order to reduce the number of changes, in each test file libkcov is imported as testbase. It will be updated in a separate commit.

Update the "ci-run-tests.sh" script to run the tests with python -m libkcov. It is necessary to set the PYTHONPATH environment variable.

The behavior of the new test runner implementation should be the same as the old one, with the difference that the order of tests is different. The new order is what users expect.
[x] tests: add support for test filtering

The command line support was already added in 74cdbaa (tests: refactor the test runner). Just ensure that a plain string pattern is converted to fnmatch syntax, as done by unittest.main.

[x] tests: add the --shuffle option to the test runner

When enabled, randomize the execution order of tests, allowing support for detecting inter-test dependencies.
[x] tests: rename testbuild to binaries

The new name is more descriptive, like sources.

[x] tests: rename KcovTestCase to TestCase

The Kcov prefix is redundant, since after this change the qualified name will be libkcov.TestCase. Use "libkcov" as the name when importing libkcov, instead of using testbase (used for avoiding additional renaming in 74cdbaa).

[x] tests: add the TestLoader.add_test_case method

The new method will make TestLoader more readable.

Additional notes
tests: refactor the test runner

stat: 16 files changed, 251 insertions(+), 111 deletions(-)

Not sure if parse_cobertura works correctly, since I have not tested it. @SimonKagstrom can you test it? Is it still used?

I named the package libkcov. A better name is kcov (or ktest?), but import kcov may cause confusion with self.kcov in the test case.

I plan to rename libkcov.KcovTestCase to libkcov.TestCase. A renaming is already needed, for renaming import libkcov as testbase to import libkcov.

I plan to rename testbuild to binaries. This is not necessary, but IMHO binaries seems a better name. Let me know if you agree.

Originally I planned to move cobertura to a subpackage, but I changed my mind, since I don't know if that code is still used, and parse_cobertura may be more convenient to use.

tests: add the --shuffle option to the test runner
After running the tests with the --shuffle option, I got two errors (in two separate runs):

I encountered the test_bash_linux_only.bash_sh_shebang.runTest failure a few times with the old test runner, usually when some other test failed.