Plebshot opened this issue 1 month ago
One way or another, your test needs to ensure that the background task doesn't panic. Otherwise those background tasks are big test escapes. It's not just Mockall that might cause a panic there.
Thanks for the quick reply!
While I agree with that in principle, I believe this is already ensured here: the existing tests for `MetricsSystem` would already check that its `record` method does not panic. So unless the tokio receiver suddenly panics, it is in fact only Mockall that can cause a panic here. Any more complex logic for the background task would also be extracted into its own function to test separately.
I'm aware this design isn't perfect in the first place, but I'm also of the opinion that simple solutions often result in fewer errors. So it's unfortunate that the current API for Mockall somewhat restricts how you can structure your code here.
Are there any specific reasons why checking the expectations has to be mutable, other than also wanting to reset them in the `checkpoint` method? Sadly, I can't really wrap my head around all the macros well enough to make a contribution here.
Yes, the `checkpoint` method requires mutable access so it can clear current expectations. BTW, if you really really really don't want to assert that your background task didn't panic, other options are:

- Call `Arc::try_unwrap()` on the mock object and then drop it. You'll have to do that in a polling loop to ensure that the background task has actually terminated.
- Use `futures::channel::oneshot`. Move the sender into the mock object like `mock.expect_record().returning(|| sender.send(()).unwrap())` and then await the receiver in the test function.

The oneshot channel in the mock return is a great tip to prevent the test from exiting before the background task has had the chance to call `record` (note: it has to be `return_once`, not `returning`, due to `FnMut`). I thought about unwrapping the `Arc` too. It works, but it also introduces some unsatisfying boilerplate once more.
> Yes, the `checkpoint` method requires mutable access so it can clear current expectations.
The point of something like a `verify`, `assert` or `satisfied` function on a mock would be that it doesn't clear the current expectations. All it should do is check whether all expectations are satisfied at the current point in time, and panic (or otherwise indicate failure) if they're not. I'd imagine this is pretty much 1:1 the logic that is already present in the drop for an expectation, or are there any other complications here?
> The point of something like a `verify`, `assert` or `satisfied` function on a mock would be that it doesn't clear the current expectations. All it should do is check whether all expectations are satisfied at the current point in time, and panic (or otherwise indicate failure) if they're not. I'd imagine this is pretty much 1:1 the logic that is already present in the drop for an expectation, or are there any other complications here?
It's not quite the same as the logic that's already present. The existing `checkpoint` logic simply drops the `Expectation` objects, so you would need to do something different. Would you like to submit a patch?
FTR, the reason that the `checkpoint` method clears existing expectations, rather than just checking that they are satisfied, is the precedent set by the Mockers crate.
I haven't worked with macros in Rust yet, so I'm having a hard time understanding the library well enough to figure out where to add a function like this, or what it has to check in particular.
What I meant initially was moving this part of the drop implementation into its own `fn assert(&self)` (or whatever you want to call it) and adding an equally named method on the generated mocks that calls this for all expectations.
Yes, that would have to be a part of your patch.
Is there any chance you could get around to adding this, or help me with the initial scaffolding for the macro-related code? Otherwise, I'm not sure when I'll get enough time to solve this on my own.
Would be greatly appreciated.
Probably not very soon, I'm afraid. My open-source time is limited right now, and there are a lot of different projects placing demands on it.
Problem
I'm having an issue with the fact that mock assertions are exclusively verified on drop. In particular, if the mock is behind an Arc that is used in a detached async task, a test won't fail as the panic doesn't propagate to the test thread. This is somewhat related to https://github.com/asomers/mockall/issues/461 and https://github.com/asomers/mockall/issues/396.
While this can be circumvented in some use cases by joining all tasks manually or using interior mutability for the mocked object, I fail to find a satisfying solution for my use case.
Proposal
Add a method like `verify(&self)` that allows checking all expectations without requiring mutability. This method could be called on mocks behind an `Arc`, without having to worry about the logistics of where the last reference ends up dropping the mock.
Context
Conceptual code:
Note that even adding `drop(app)` manually, to ensure the actix server shuts down before the mock is dropped, doesn't work here, because the detached background task lives longer than the drop.
Potential Workarounds
- Put the `MetricsSystem` behind a `Mutex` to allow using `checkpoint()` on the mock. Not an acceptable performance overhead if it's just needed for tests.
- Create a `SyncMockMetricsSystem` that wraps `MockMetricsSystem` in a `Mutex` and manually reimplements each method of the trait. Introduces boilerplate and might cause issues due to having a mutex only in tests.
- Use plain `#[test]` and create/drop the async runtime manually? Not entirely sure if that really solves the issue, but it would also introduce boilerplate for each test again.

If you have any other suggestions to solve this, I'm happy to hear them.
Thanks for the great work on this library!