askimed / nf-test

Simple test framework for Nextflow pipelines
https://www.nf-test.com
MIT License

Run nf-test in stub mode #227

Open · adamrtalbot opened 3 weeks ago

adamrtalbot commented 3 weeks ago

Currently, to use stubs we have to write an additional test with stub enabled, with its own assertions and parameters. However, we would generally like the stub run of nf-test to be as close to the real thing as possible, and a separate test creates a divergence between the main test and the stub test. Ideally, we would run the stub test prior to the 'real' test to make sure everything makes sense before spending time on the main test.

Instead of writing a separate test, nf-test could have a native stub mode which allows us to reuse the same test but with lower stringency. This would enable us to run nf-test in a 'pre-run' mode and use the same code and logic as the main nf-test.

Suggested implementation:

  1. The user writes a process/workflow/pipeline and an associated nf-test, including assertions etc.
  2. The user runs nf-test test -stub, which runs Nextflow in stub mode.
  3. nf-test proceeds as normal using the CLI option -stub.
  4. When it reaches the assertions, nf-test disregards the normal assertions:
    • Option 1: Ignore all assertions except success or failure
    • Option 2: Option 1 plus check for file existence in snapshots
    • Option 3: The author writes an additional, optional stub assertion block into the nf-test, which is used instead of the main assertions.
maxulysse commented 3 weeks ago

I like that idea very much. I've been working on https://github.com/nf-core/rnaseq/pull/1335, in which I checked/fixed/implemented stubs and stub tests for lots of modules and subworkflows, and I found it quite cumbersome to rewrite every test for stubs. For all of these, what I was doing was just copy-pasting the regular test, adding "- stub" to the name, adding options "-stub", and snapshotting process.out or workflow.out. That feels like a lot of code duplication for not much.
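
In current nf-test syntax, that duplication looks roughly like this (the process name, input, and paths below are placeholders, not taken from the rnaseq PR):

```groovy
nextflow_process {

    name "Test Process FOO"
    script "../main.nf"
    process "FOO"

    test("test data - bam") {
        when {
            process {
                """
                input[0] = [ [ id:'test' ], file(params.test_data_bam) ]
                """
            }
        }
        then {
            assert process.success
            assert snapshot(process.out).match()
        }
    }

    // Near-identical copy: only the name and the -stub option differ.
    test("test data - bam - stub") {
        options "-stub"
        when {
            process {
                """
                input[0] = [ [ id:'test' ], file(params.test_data_bam) ]
                """
            }
        }
        then {
            assert process.success
            assert snapshot(process.out).match()
        }
    }
}
```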

lukfor commented 3 weeks ago

Thanks, that definitely makes sense to save time and avoid duplicated code. I am currently busy with teaching (end of semester) and trying to release 0.9.0 really soon. But afterwards, I'll think about how we could implement Adam's ideas.

edmundmiller commented 3 weeks ago

What if there were just macros?

https://github.com/avajs/ava/blob/main/docs/01-writing-tests.md#reusing-test-logic-through-macros

Or this might be some wizardry but what about Data tables support?

https://spockframework.org/spock/docs/2.3/data_driven_testing.html#data-tables

nvnieuwk commented 3 weeks ago

It would also be nice if an option existed that allows the user to specify which tests should be run in stub and in normal mode. This way we can start testing all tests in stub mode very easily without having to rewrite a new test for each test variation. (I'm thinking something along the lines of a new option called testMode that could take a list of test modes => there's probably a better way to do this, but I'm just spitballing here 😁)
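
Purely as an illustration of that spitball (the testMode directive below is hypothetical and does not exist in nf-test), a per-test declaration might look something like:

```groovy
nextflow_process {

    name "Test Process FOO"
    script "../main.nf"
    process "FOO"

    test("test data - bam") {
        // Hypothetical directive: run this test once per listed mode,
        // reusing the same when/then blocks for both runs.
        testMode "stub", "normal"

        when {
            process {
                """
                input[0] = [ [ id:'test' ], file(params.test_data_bam) ]
                """
            }
        }
        then {
            assert process.success
        }
    }
}
```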

adamrtalbot commented 3 weeks ago

> What if there were just macros?
>
> https://github.com/avajs/ava/blob/main/docs/01-writing-tests.md#reusing-test-logic-through-macros
>
> Or this might be some wizardry but what about Data tables support?
>
> https://spockframework.org/spock/docs/2.3/data_driven_testing.html#data-tables

These are both nice, but I think they're better suited to testing a matrix of inputs than to stubs. I think stubs should be baked into the normal tests, so a Spock equivalent might be:

```groovy
when:
include { process_a } from 'process_a'

then:

expect:
process_a(INPUT) == EXPECTED

where:
INPUT | EXPECTED
1     | 1
and:
2     | 2
4     | 2

stub:
process_a.success
process_a.out.output[0].name == "touch.txt"
```

This is basically writing a separate set of assertions into the main test for stubs.

> It would also be nice if an option existed that allows the user to specify which tests should be run in stub and in normal mode. This way we can start testing all tests in stub mode very easily without having to rewrite a new test for each test variation. (I'm thinking something along the lines of a new option called testMode that could take a list of test modes => there's probably a better way to do this, but I'm just spitballing here 😁)

I think we're agreed: if we can, we should write stubs into the main body of the test rather than as a separate test. I guess the question is how.

lukfor commented 3 weeks ago

Writing a separate stub body sounds like a good approach (especially since it is similar to Nextflow itself). When a stub body is present, we start the test with the stub flag first and check only the assertions within the stub body. Next, we can run it in normal mode, checking the regular assertions.
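
A rough sketch of what that could look like, assuming a hypothetical stub block alongside the regular then block (channel name and file name are placeholders, none of this is implemented yet):

```groovy
nextflow_process {

    name "Test Process FOO"
    script "../main.nf"
    process "FOO"

    test("test data - bam") {
        when {
            process {
                """
                input[0] = [ [ id:'test' ], file(params.test_data_bam) ]
                """
            }
        }
        // Hypothetical block: only these assertions are checked when the
        // test runs with -stub; the regular then block is skipped.
        stub {
            assert process.success
            // with a stub, only the emitted file name is meaningful
            assert process.out.output.get(0).toString().endsWith("touch.txt")
        }
        then {
            assert process.success
            assert snapshot(process.out).match()
        }
    }
}
```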

Based on this, we can implement different strategies, such as skipping the regular test if the stub fails, etc.