HenriBeck closed this issue 5 years ago.
Good idea; I agree that the current situation is not optimal / discoverable and only leads to copying & pasting tests to get the boilerplate right.
> Note: The order of expectations matters. Assume we have a saga which first calls a function, then dispatches an action, and then calls another function.
I am wondering whether `toRun`/`Call`/`Spawn` would need the ability to return test values (what's currently done with `.receiving()`), as there are probably always some side-effects that are harder to mock (even at the cost of making the test less practically useful).
I think we should refine the return types a little further, just to make sure (on the type level) that methods are called in the same order, and only the desired amount of times (e.g. once for `toHaveState`). (This means not always returning `SagaTest<_>`, but some intermediate types until the final `run`.)
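As a rough illustration of such intermediate types, here is a minimal sketch (all names are invented for illustration, this is not tsaga's actual API): each builder stage only exposes the methods that are legal next, so calling `toHaveState` twice, or `run` too early, is a compile-time error.

```typescript
// Hypothetical sketch of stage-restricted builder types.
type Expectation = { kind: 'call' | 'dispatch'; name: string };

class SagaTestBuilder<State> {
  constructor(readonly expectations: Expectation[] = []) {}

  toCall(name: string): SagaTestBuilder<State> {
    return new SagaTestBuilder<State>([...this.expectations, { kind: 'call', name }]);
  }

  toDispatch(name: string): SagaTestBuilder<State> {
    return new SagaTestBuilder<State>([...this.expectations, { kind: 'dispatch', name }]);
  }

  // Returns a type that no longer offers toHaveState, so chaining it
  // a second time is rejected by the compiler, not discovered at runtime.
  toHaveState(finalState: State): FinalizedSagaTest<State> {
    return new FinalizedSagaTest(this.expectations, finalState);
  }
}

class FinalizedSagaTest<State> {
  constructor(
    readonly expectations: Expectation[],
    readonly finalState: State,
  ) {}

  run(): void {
    // A real implementation would execute the saga here and verify the
    // recorded expectations against the observed effects.
  }
}
```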
Do you think we need the ability to run multiple sagas at the same time? I think that is generally where we tend to mock to get the desired side-effects (in the OMS case we'd mock `yield call`s because the other saga didn't have the state available, but that would be obsolete with our new approach of always using the full app state). But the problem would still persist for dispatch & `take` scenarios, or generally `dispatch`es which don't run the related sagas and hence don't influence the final store in the test, even though they would do so in a running application.
> I think we should refine the return types a little further, to make sure (on the type level) that methods are called in the same order, and only the desired amount of times (e.g. once for `toHaveState`). (This means not always returning `SagaTest<_>`, but some intermediate types until the final `run`.)
> I am wondering whether `toRun`/`Call`/`Spawn` would need the ability to return test values (what's currently done with `.receiving()`), as there are probably always some side-effects that are harder to mock (even at the cost of making the test less practically useful).
We need the possibility, especially on the App for sagas which might call some native code. I believe in general it would be best to mock out API responses and not mock `call`/`spawn`/`run` effects, as this would be closer to actual real-life behavior. We also currently mock any `call`s etc. in the App so we don't have to worry about those dependencies, as this could potentially create a long list of mocks and imports.
> Do you think we need the ability to run multiple sagas at the same time? I think that is generally where we tend to mock to get the desired side-effects (in the OMS case we'd mock `yield call`s because the other saga didn't have the state available, but that would be obsolete with our new approach of always using the full app state). But the problem would still persist for dispatch & `take` scenarios, or generally `dispatch`es which don't run the related sagas and hence don't influence the final store in the test, even though they would do so in a running application.
Let's design the test API to be independent so that tests can run in parallel. So every test would create its own store and even set up the required middleware.
```ts
withCall<Args extends any[], Return>(
  fn: (...args: Args) => Return,
  value: Return | ((...args: Args) => Return),
): SagaTest<State, Payload>;
```
I guess the idea here is to have a function to create the value dynamically, right? I think that would be nice, though from the OMS side I can't think of any specific case where the same call is executed many times. Do you have an example for this (e.g. `call` in a loop)?
The only problem I see on the type level is that we can't discern what to do with `value`, as a `call` might also return a function (but with different parameters than its own), so it wouldn't be safe to invoke any `value` where `typeof value === 'function'`.
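One way around that ambiguity would be a tagged wrapper so that a literal function return value can never be confused with a lazy factory. This is only a sketch under that assumption; `MockValue`, `staticValue`, `dynamicValue`, and `resolve` are invented names, not tsaga API:

```typescript
// Tagged union: a static mock value vs. a factory invoked with the call's args.
type MockValue<Args extends any[], Return> =
  | { kind: 'static'; value: Return }
  | { kind: 'dynamic'; factory: (...args: Args) => Return };

function staticValue<Args extends any[], Return>(value: Return): MockValue<Args, Return> {
  return { kind: 'static', value };
}

function dynamicValue<Args extends any[], Return>(
  factory: (...args: Args) => Return,
): MockValue<Args, Return> {
  return { kind: 'dynamic', factory };
}

// The discriminant, not `typeof value === 'function'`, decides whether to invoke.
function resolve<Args extends any[], Return>(
  mock: MockValue<Args, Return>,
  ...args: Args
): Return {
  return mock.kind === 'static' ? mock.value : mock.factory(...args);
}
```

With this, `staticValue(() => 42)` is returned to the saga as-is, while `dynamicValue((a, b) => a + b)` is invoked with the call's arguments.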
`toCall`, `toSpawn`, and `toRun` are just meant as assertions, with the underlying operation still being executed, right?
I guess that apart from these assertions (which should be fairly easy to add), all of the functionality is already available through `testSagaWithState`. So this would end up being a nice API (including, hopefully, better documentation) on top of it :)
I am wondering whether we should make the `.receiving()` or `withCall` (in the parlance above) more explicit and expose it as a "mock*" function, as that's what it is.
Especially for `select` mocking, these became completely obsolete with the full-state testing approach taken in OMS, which I deem a very good thing.
For the final state comparison we might want to provide a lens, though, to only look at parts of the state. While that can be dangerous (ignoring side-effects on other stores), on the plus side it will help to keep the test focused, so that changes to e.g. the tracking state don't affect nearly all user-interaction tests.
@HenriBeck I have added some type examples for an API like the above in https://github.com/tp/tsaga/commit/06306db8ae8c9c5227626465d5833cfb27bb80fd
Let's hash out the details later today; in particular, I am not sure about the interaction between `withCall`/`provide` and `toCall`.
My current thinking is that `withCall` is an assertion + value provider, so maybe it would be clearer to not separate them but to find a single API (like an optional `receiving` parameter?).
> I guess the idea here is to have a function to create the value dynamically, right? I think that would be nice, though from the OMS side I can't think of any specific case where the same call is executed many times. Do you have an example for this (e.g. `call` in a loop)?
Yeah, so instead of us implementing some pattern matching, the end developer would do it if necessary. I highly doubt that this would be used a lot, but it keeps the option open. We might have some duplicate selectors being called in a loop.
> The only problem I see on the type level is that we can't discern what to do with `value`, as a `call` might also return a function (but with different parameters than its own), so it wouldn't be safe to invoke any `value` where `typeof value === 'function'`.
At least in the App, I'm not aware that we are doing this. If a `call` were to return a function, you could get around it by returning that function from the mock.
> `toCall`, `toSpawn`, and `toRun` are just meant as assertions, with the underlying operation still being executed, right?
Yes, unless they are mocked.
> For the final state comparison we might want to provide a lens, though, to only look at parts of the state. While that can be dangerous (ignoring side-effects on other stores), on the plus side it will help to keep the test focused, so that changes to e.g. the tracking state don't affect nearly all user-interaction tests.
Right, I think we could do this on the top level, e.g. `toHaveFinalState({ basket: finalBasketState })` would only validate the basket state and leave the others ignored. But the basket slice should be checked with deep and strict equality, so there is no mismatch between properties.
> Right, I think we could do this on the top level, e.g. `toHaveFinalState({ basket: finalBasketState })` would only validate the basket state and leave the others ignored. But the basket slice should be checked with deep and strict equality, so there is no mismatch between properties.
Seems possible, though I would like to use another function name for the partial assertion, to make it explicit that the test took a shortcut. Otherwise it's just too easy to forget by accident (when adding new top-level properties on the `AppState`).
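Such a separately named partial assertion could be sketched like this (the name `toHavePartialFinalState` is illustrative only): each slice that is listed is compared with deep strict equality, and every top-level slice that is omitted is ignored entirely.

```typescript
import { deepStrictEqual } from 'assert';

// Hypothetical partial final-state assertion; the explicit name signals
// that the test deliberately ignores the unlisted top-level slices.
function toHavePartialFinalState<State extends object>(
  actual: State,
  expected: Partial<State>,
): void {
  for (const key of Object.keys(expected) as (keyof State)[]) {
    // Deep *strict* equality per listed slice, so a property mismatch
    // inside e.g. `basket` still fails the test.
    deepStrictEqual(actual[key], expected[key]);
  }
}
```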
Just had an interesting case in OMS: How would we go about defining that a saga doesn't dispatch or call something?
Right now all test helpers are built to "pass through" when no mock exists.
For state-modifying things this wouldn't be as big of a deal, since we can always assert on the final state. But for outside side-effects (e.g. a tracking call being made), this might be a worthwhile addition.
How does a saga which modifies state but doesn't dispatch anything work?
So you mean a saga which only calls a method from a library (localytics for example)?
@HenriBeck Yes, for example. Or it might also (conditionally) dispatch an action which would trigger another saga, but that saga is not part of the test and hence we wouldn't see its impact on the final state.
(The concrete use case is not saving something under a certain condition, and if that is not directly done in the reducer but rather through an action in another saga, we don't see it.)
I guess if it's a `call`/`run`, we could provide a mock value that throws / brings down the test using `fail`.
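A minimal sketch of that idea, assuming a hypothetical helper (`failIfCalled` is not part of tsaga): registering it as the mock value for an effect turns any invocation of that effect into an immediate test failure.

```typescript
// Returns a mock whose invocation fails the test, for asserting that a
// saga does NOT perform a given call (e.g. an unwanted tracking call).
function failIfCalled(name: string): (...args: unknown[]) => never {
  return () => {
    throw new Error(`Unexpected call to ${name} during saga test`);
  };
}
```

Usage would then look something like `withCall(trackEvent, failIfCalled('trackEvent'))`, where `trackEvent` stands in for whatever side-effecting function must not run.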
Hmmm, yeah. I'm leaning towards less mocking, as mocking makes the test less safe.
I believe it should be possible to mock the network request with `nock`, for example, and leave most of the other stuff unmocked. It should be possible to create a version of the store with all of the sagas etc. available, though this could also make tests take longer.
Also for an action which triggers another saga we still can assert on the actual action and the payload.
How would we handle `take` in tests? I think you always need to assert a `take` in the tests to give the correct action back.
> How would we handle `take` in tests? I think you always need to assert a `take` in the tests to give the correct action back.
True. I think there are currently no facilities to mock `take`. I have created #40 to cover that.
What we should also consider is testing functions which get the env injected (currently `BoundEffects`).
I hope we can just drop `BoundEffects` with TS 3.4 (the broken type inference being the only reason why it was added in the first place). Or do you have any place where this would actually be nicer than `$.run`?
No, I prefer the function as it's easier to understand. Do you know for sure that this fix will come in 3.4? Because then I think it would make sense for the app team to wait for it.
I actually already have a mostly finished prototype for this testing. Dropping the `BoundEffect` would make a lot of things easier.
@HenriBeck Yes, it will happen; it's already merged and they confirmed that our use-case is fixed.
(I think the RC is out, so if I find some time I can update this package to the latest 3.4 RC and verify that it's really fixed. We would need to bump the `peerDependencies` though, so it would not be suitable for consumption in our apps yet.)
The saga testing currently is one function which gets all of the mocks and initial values. I think a chainable API would be easier to read and understand. This proposes a chainable API similar to `redux-saga-test-plan`'s.
Note: The order of expectations matters. Assume we have a saga which first calls a function, then dispatches an action, and then calls another function.
When the order of expectations is `toDispatch(action), toCall(firstFunc), toCall(lastFunc)`, the saga test would fail. In contrast, if we only expect `toDispatch(action), toCall(lastFunc)`, the saga test wouldn't fail, as we are allowed to skip functions and actions but not jump over expectations.
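The matching rule described above can be sketched as an in-order subsequence check (function and effect encoding here are illustrative): effects the saga performs may be skipped, but expectations must match in the order they were declared.

```typescript
// Returns true iff `expectations` occurs as an in-order subsequence of
// the saga's `actualEffects`; extra effects may be skipped, but an
// expectation can never match an effect that came before the previous match.
function expectationsSatisfied(actualEffects: string[], expectations: string[]): boolean {
  let next = 0;
  for (const effect of actualEffects) {
    if (next < expectations.length && effect === expectations[next]) {
      next++; // this expectation matched; advance to the next one
    }
  }
  return next === expectations.length;
}
```

For the example above, a saga producing `call:firstFunc, dispatch:action, call:lastFunc` fails against `toDispatch(action), toCall(firstFunc), toCall(lastFunc)` (out of order) but passes against `toDispatch(action), toCall(lastFunc)` (skipping is allowed).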