Arranging tests to have the correct platform parameter state is essential for testing app behavior in cases that are potentially very difficult to test manually (particularly because one needs to manually set up a platform parameter module, override all platform parameters needed by downstream dependencies, and then change the current parameter before injection).
#4204 lays the foundation for a much better approach: one module is shared for all tests and can provide case-by-case overriding for each parameter. While this makes the situation much better, it still has the drawback of not technically setting up the parameters in parity with production. Production versions of the app guarantee that a parameter cannot change for the lifetime of the app instance, whereas post-injection parameter overriding does allow for this possibility. This can introduce state inconsistencies that at best lead to bugs in tests and at worst a false sense of correctness when tests that rely on such behavior pass.
I think the best path forward is to build upon the foundation from #4204 by:
1. Introducing a JUnit rule & annotation(s) for overriding platform parameters (see below for some specific thoughts on this, plus the sketch just after this list). It's expected that this mechanism be built such that the production parameter module changes its behavior (probably by swapping out the provider for the parameters with a test-overridable mechanism).
2. Updating OppiaTestRule to automatically incorporate the above rule.
3. Updating all tests to use the production module instead of the test or a custom module, and adding OppiaTestRule/test annotations where needed to ensure overrides still happen.
4. Removing the test platform parameter module and ensuring that the production module properly scopes parameters as singletons.
5. Maybe: introducing additional JUnit functionality to help with testing.
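To make (1) a bit more concrete, here's a minimal sketch of what the "test-overridable mechanism" could look like in the production module. All of the names here (PlatformParameterValue, TestPlatformParameterOverrides, EnableMyFeature) are illustrative placeholders rather than the actual Oppia classes:

```kotlin
import javax.inject.Qualifier
import javax.inject.Singleton
import dagger.Module
import dagger.Provides

/** Illustrative wrapper for a single immutable platform parameter value. */
class PlatformParameterValue<T>(val value: T)

/** Illustrative qualifier for one boolean feature parameter. */
@Qualifier
annotation class EnableMyFeature

/** Test-only override registry; nothing writes to it in production, so defaults always win there. */
object TestPlatformParameterOverrides {
  private val overrides = mutableMapOf<String, Any>()

  fun override(name: String, value: Any) { overrides[name] = value }

  fun clear() = overrides.clear()

  @Suppress("UNCHECKED_CAST")
  fun <T> getOrDefault(name: String, default: T): T = overrides[name] as? T ?: default
}

@Module
class PlatformParameterModule {
  // Singleton scoping ensures the value can't change for the lifetime of the app instance,
  // matching production guarantees; tests must therefore apply overrides *before* injection.
  @Provides
  @Singleton
  @EnableMyFeature
  fun provideEnableMyFeature(): PlatformParameterValue<Boolean> =
    PlatformParameterValue(
      TestPlatformParameterOverrides.getOrDefault("enable_my_feature", default = false)
    )
}
```

The key property is that the provider is still singleton-scoped and read exactly once, so production behavior is unchanged; in tests, the JUnit rule from (1) would populate the registry before injection happens.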
It's possible to also directly build on the new functionality introduced in #4204 by instead overriding values in the test module via a JUnit annotation/rule, and using the test module everywhere. However, we should try to minimize test-specific modules & dependencies where possible, since doing so gives tests better parity with production (by reducing test-specific behaviors that could diverge from production behavior). Note also that both approaches will help with the Hilt migration (#1720) since they normalize on one module for all tests.
Re: (1), something like the following syntax might be nice to implement:
@Test
@OverrideStringParameter(name = "some_str_param", value = "new_value")
@OverrideBoolParameter(name = "enable_my_feature", value = true)
fun testSomething_whenSomething_isSomething() {
  // Test dependencies will only ever see the overridden parameter values, and the test itself can inject them.
}

@Test
@OverrideBoolParameter(name = "enable_my_feature", value = true)
fun test...
(And ditto for the other supported parameter types.) The corresponding rule would be the actual mechanism that pulls these annotations and overrides the values before running the test (and restores them afterwards). I'd also expect the annotation to work for both test methods & test classes (where the latter affects every test in the suite and keeps parameter values consistent for all of them). The annotation will need to be repeatable, which means we might run into an issue where repeated annotations fail to build in Kotlin (upgrading to Kotlin 1.6 may fix this, and that upgrade is currently in progress in #4261). We also probably need to make sure that people don't accidentally forget the rule by introducing a file content regex check that requires all tests to specify OppiaTestRule as an explicit @Rule (though this might need to be a custom script check if it can't be done via regex).
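For illustration only, a minimal sketch of what that rule could look like, reusing the hypothetical OverrideBoolParameter annotation and TestPlatformParameterOverrides registry from the earlier sketch (neither is an existing Oppia API):

```kotlin
import org.junit.rules.TestRule
import org.junit.runner.Description
import org.junit.runners.model.Statement

/** Hypothetical annotation shape; would become @Repeatable once the Kotlin 1.6 upgrade lands. */
@Target(AnnotationTarget.FUNCTION, AnnotationTarget.CLASS)
annotation class OverrideBoolParameter(val name: String, val value: Boolean)

/** Applies annotation-declared parameter overrides before each test and clears them afterwards. */
class PlatformParameterOverrideRule : TestRule {
  override fun apply(base: Statement, description: Description): Statement {
    return object : Statement() {
      override fun evaluate() {
        // Class-level overrides apply to every test in the suite; method-level ones refine them.
        // (A full implementation would also handle repeated annotations and the other value types.)
        val overrides = listOfNotNull(
          description.testClass?.getAnnotation(OverrideBoolParameter::class.java),
          description.getAnnotation(OverrideBoolParameter::class.java)
        )
        // Registry name reused from the earlier sketch; purely illustrative.
        overrides.forEach { TestPlatformParameterOverrides.override(it.name, it.value) }
        try {
          base.evaluate() // Run the test with the overrides already in place.
        } finally {
          TestPlatformParameterOverrides.clear() // Restore defaults so state can't leak across tests.
        }
      }
    }
  }
}
```

OppiaTestRule could then chain this rule (per step 2 above) so that individual tests only need to declare the annotations.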
Re: (5), there are two bits of advanced functionality that can be exceptionally useful:
a. Enumerating many different scenarios of non-feature platform parameters (useful, for example, for testing tuning parameters).
b. Running a test for all combinations of a set of feature parameter values (useful for broad regression testing, especially between features that might have interesting overlapping behaviors in different enable states).
(a) can probably be implemented by building special iteration types into the app's existing parameterized test runner to override parameters (though thought is needed for how these combine with non-parameter iterations, and how the rule hook-up bits need to work to ensure the override actually happens). Some verification enforcement will be needed to ensure real parameters are being referenced.
(b) can probably be implemented in a similar way to (a), but with a different custom iteration type (maybe one where the author provides the names of all boolean parameters they want to permute). Some verification enforcement will be needed to ensure actual boolean parameters are referenced.
The benefit of leveraging the existing parameterization framework for (a) and (b) is that it already handles the tricky part of creating unique test cases for each parameterized scenario, and properly re-setting up the test across each parameter configuration without leaking state (in a cross-runner and test environment way).
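Purely as an illustration of what (a) and (b) could look like, the custom iteration types might take shapes like the following; ParamIteration and PermuteBoolParams are placeholder names and nothing like this exists in the runner yet:

```kotlin
/** (a): one named iteration of a test, with explicit "param_name=value" overrides to apply. */
@Target(AnnotationTarget.FUNCTION)
annotation class ParamIteration(val name: String, vararg val overrides: String)

/** (b): run the annotated test once per combination of the listed boolean feature parameters. */
@Target(AnnotationTarget.FUNCTION)
annotation class PermuteBoolParams(vararg val parameterNames: String)

// Example usage (the runner would verify that the referenced parameters actually exist):
//
// @Test
// @ParamIteration("low_cache", "cache_size_mb=16")
// @ParamIteration("high_cache", "cache_size_mb=256")
// fun testCaching_withDifferentCacheSizes_behavesCorrectly() { ... }
//
// @Test
// @PermuteBoolParams("enable_my_feature", "enable_other_feature")
// fun testFeatures_acrossAllEnableStateCombinations_hasNoRegressions() { ... }
```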
Related issue: #4303 (it might share some of the production-side implementation), though either issue can be implemented first (there's no specific expected order between the two).