helium / oracles

Oracles for Helium subDAOs
Apache License 2.0

Clippy Dead Test Code #804

Closed michaeldjeffrey closed 2 months ago

michaeldjeffrey commented 2 months ago

While reading about dead code problems in tests, I found multiple people pointing to this blog post.

The issue is that cargo compiles a separate binary for each test suite, so when you share code across suites, dead-code checking happens at the per-suite level.

For example, suppose you have several integration test files under `tests/` that share a common module. If only `one.rs` uses `shared_fn()`, `cargo test` will complain that it's unused, because it isn't used in every binary that gets created.
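A minimal sketch of the layout being described (file names other than `one.rs` and `shared_fn` are hypothetical):

```
tests/
├── one.rs          # suite binary 1 — calls shared_fn()
├── two.rs          # suite binary 2 — never calls shared_fn()
└── common/
    └── mod.rs      # pub fn shared_fn() { ... }
```

Each `.rs` file directly under `tests/` becomes its own test binary, and `common` is compiled into each of them, so the `two.rs` binary sees `shared_fn()` as dead code.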

Moving all the test modules into an `integrations/main.rs` forces a single binary to be compiled, which gets the same dead-code checking as a regular application.
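In the single-binary layout, the former suite files become modules of one test crate. A hedged sketch (module names other than `one` are illustrative):

```rust
// tests/integrations/main.rs — the only entry point cargo discovers
// under tests/, so everything below compiles into one test binary.
mod common; // shared helpers; unused items here are now flagged normally
mod one;
mod two;
```

Because there is exactly one binary, `shared_fn()` only needs to be used somewhere in the crate to satisfy the lint, not in every suite.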


I also ran some timing tests to make sure this didn't make anything slower. These were run against only the `mobile-verifier` package. `time` was used to measure `cargo test`; `cargo nextest` reports its own timing.

Running the tests back to back, the first pass tended to be faster by about 10 seconds, so take these numbers with a grain of salt. All runs were on a 2.3 GHz 8-core Intel Core i9.

```shell
time cargo test --quiet --package mobile-verifier -- --test-threads 1
time cargo test --quiet --package mobile-verifier -- --test-threads 1 --include-ignored
time cargo test --quiet --package mobile-verifier
time cargo test --quiet --package mobile-verifier -- --include-ignored
# ---
cargo nextest run -E 'package(mobile-verifier)' -j 1
cargo nextest run -E 'package(mobile-verifier)' -j 1 --run-ignored all
cargo nextest run -E 'package(mobile-verifier)'
cargo nextest run -E 'package(mobile-verifier)' --run-ignored all
```
| Command | Test Layout | Threads | Test Count | Skipped | Time (sec) |
|---------|-------------|---------|------------|---------|------------|
| nextest | Single File | 1       | 48         | 18      | 36.10      |
| nextest | Multi File  | 1       | 48         | 18      | 44.35      |
| nextest | Single File | 8       | 48         | 18      | 12.07      |
| nextest | Multi File  | 8       | 48         | 18      | 11.78      |
| nextest | Single File | 1       | 66         | 0       | 74.28      |
| nextest | Multi File  | 1       | 66         | 0       | 94.65      |
| nextest | Single File | 8       | 66         | 0       | 23.50      |
| nextest | Multi File  | 8       | 66         | 0       | 26.31      |
| cargo   | Single File | 1       | 48         | 18      | 25.42      |
| cargo   | Multi File  | 1       | 48         | 18      | 34.69      |
| cargo   | Single File | 8       | 48         | 18      | 13.69      |
| cargo   | Multi File  | 8       | 48         | 18      | 24.17      |
| cargo   | Single File | 1       | 66         | 0       | 81.29      |
| cargo   | Multi File  | 1       | 66         | 0       | 74.17      |
| cargo   | Single File | 8       | 66         | 0       | 25.45      |
| cargo   | Multi File  | 8       | 66         | 0       | 40.91      |

Thoughts

The reproducibility of the test timings is finicky at best, but the tests do not get slow enough that I feel it outweighs the benefits of switching to this testing structure.

The main benefit is more reliable code reuse in our tests, without needing to mark shared functions `#[allow(dead_code)]` when they are not used in every test suite.
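For context, this is the workaround the multi-binary layout forces today; a hypothetical sketch (the helper's name and body are illustrative):

```rust
// tests/common/mod.rs — multi-binary layout.
// Without this attribute, every suite binary that does not call
// `shared_fn` emits a dead_code warning for it.
#[allow(dead_code)]
pub fn shared_fn() -> u32 {
    42
}
```

With the single-binary layout, the attribute can be dropped and a genuinely unused helper is reported as dead code, just as it would be in application code.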