rust-random / getrandom

A small cross-platform library for retrieving random data from (operating) system source
Apache License 2.0

Code test coverage should be measured and reported in CI #288

Open briansmith opened 2 years ago

briansmith commented 2 years ago

There is a lot of runtime feature detection and other conditional logic in this crate. AFAICT, when tests are run, which implementation gets picked is effectively arbitrary. For example, on Linux, AFAICT only the getrandom syscall implementation is tested, and the file I/O fallback is never exercised. Publishing a code test coverage report would make it clear which code isn't being tested on which platforms.
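
To illustrate the shape of the problem, here is a minimal hypothetical sketch (not this crate's actual code; it assumes a dependency on the libc crate) of syscall-first logic with a file-based fallback. On any CI kernel that has getrandom(2), the fallback branch is never executed, which is exactly what a coverage report would surface:

```rust
// Hypothetical sketch (not getrandom's actual code) of syscall-first logic
// with a file-based fallback. On any kernel that has getrandom(2), the
// fallback branch below is never executed, so line coverage would flag it.
use std::fs::File;
use std::io::Read;
use std::sync::atomic::{AtomicBool, Ordering};

static USE_FILE_FALLBACK: AtomicBool = AtomicBool::new(false);

fn getrandom_syscall(dest: &mut [u8]) -> std::io::Result<()> {
    // A real implementation would loop on short reads and retry on EINTR.
    let ret = unsafe {
        libc::syscall(libc::SYS_getrandom, dest.as_mut_ptr(), dest.len(), 0)
    };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(())
    }
}

pub fn fill_bytes(dest: &mut [u8]) -> std::io::Result<()> {
    if !USE_FILE_FALLBACK.load(Ordering::Relaxed) {
        match getrandom_syscall(dest) {
            Ok(()) => return Ok(()),
            // ENOSYS: the kernel lacks the syscall, so switch permanently
            // to the file path.
            Err(e) if e.raw_os_error() == Some(libc::ENOSYS) => {
                USE_FILE_FALLBACK.store(true, Ordering::Relaxed);
            }
            Err(e) => return Err(e),
        }
    }
    // Uncovered on any CI runner whose kernel supports getrandom(2).
    File::open("/dev/urandom")?.read_exact(dest)
}
```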

There is also a lot of code that is copied, modified, and pasted. This is understandable because some targets have slightly different APIs. My hope is that once coverage measurements are published, we'll see clearly which duplicated coding patterns we should factor out, so that platforms that are difficult to test (those lacking test runners) share as much covered code as possible and the amount of uncovered code is minimized.

Also, I expect that having code test coverage will facilitate more exhaustive testing, such as writing tests that exercise both the getrandom syscall branch and the file I/O fallback, e.g. by using ptrace or an equivalent, similar to what BoringSSL does.
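
Continuing the hypothetical sketch above, one cheap way such a test might look (short of a full ptrace harness) is to force the fallback flag from a unit test so both paths appear in the coverage report; a ptrace-based harness, as in BoringSSL, could instead make the real syscall fail with ENOSYS and exercise the switch-over logic itself:

```rust
// Unit test for the hypothetical sketch above: force the file branch so
// both paths show up in the coverage report.
#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::atomic::Ordering;

    #[test]
    fn fill_bytes_via_file_fallback() {
        USE_FILE_FALLBACK.store(true, Ordering::Relaxed);
        let mut buf = [0u8; 32];
        fill_bytes(&mut buf).unwrap();
        // All-zero output from /dev/urandom is astronomically unlikely.
        assert!(buf.iter().any(|&b| b != 0));
    }
}
```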

newpavlov commented 1 year ago

Are you proposing to measure coverage on a per-target basis? I am not sure if it will be possible to accumulate coverage data for supported targets that run in separate CI jobs. Also, we cannot run tests in CI for all supported targets in the first place.

As for measuring code coverage, tarpaulin is quite a convenient tool for that.
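
For reference, a minimal local run might look like this (tarpaulin is Linux-only; the Xml output writes a cobertura.xml suitable for upload to a coverage service):

```sh
# Measure line coverage locally with tarpaulin (Linux-only):
cargo install cargo-tarpaulin
cargo tarpaulin --out Xml   # writes cobertura.xml for a coverage service
```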

josephlr commented 1 year ago

I think that ideally we would collect coverage metrics from a bunch of different runs on a bunch of different targets, and then have a way to "merge" all this coverage data. Given the limitations of some of our targets, certainly some files wouldn't be covered, but we would be able to see which lines of code are being hit by at least some of our tests.

briansmith commented 1 year ago

> Are you proposing to measure coverage on a per-target basis? I am not sure if it will be possible to accumulate coverage data for supported targets that run in separate CI jobs.

In the ring CI we do collect code test coverage for multiple targets. We send it to codecov.io and then codecov.io merges it all for us automatically.
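
For getrandom, a hedged sketch of what that could look like with GitHub Actions and cargo-llvm-cov (the job name, target list, and paths here are hypothetical; cross-target runs may need extra setup, e.g. gcc-multilib for i686):

```yaml
# Hypothetical CI sketch: each matrix job measures coverage for one target
# and uploads an LCOV report; codecov.io merges the reports automatically.
jobs:
  coverage:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        target: [x86_64-unknown-linux-gnu, i686-unknown-linux-gnu]
    steps:
      - uses: actions/checkout@v4
      - run: rustup target add ${{ matrix.target }}
      - run: cargo install cargo-llvm-cov
      - run: cargo llvm-cov --target ${{ matrix.target }} --lcov --output-path lcov.info
      - uses: codecov/codecov-action@v3
        with:
          files: lcov.info
```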

josephlr commented 4 months ago

It might be a good idea to also incorporate branch coverage (in addition to line coverage) to make sure we are hitting alternative code paths: https://github.com/taiki-e/cargo-llvm-cov/issues/8
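
A minimal sketch of what that would look like (at the time of writing, cargo-llvm-cov's --branch flag requires a nightly toolchain):

```sh
# Sketch: line + branch coverage via cargo-llvm-cov's --branch flag
# (currently nightly-only):
rustup component add llvm-tools-preview
cargo +nightly llvm-cov --branch --html   # HTML report including branch hits
```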