asomers / mockall

A powerful mock object library for Rust

Feature request: Allow manually verifying expectations without requiring mutability #598

Open Plebshot opened 1 month ago

Plebshot commented 1 month ago

Problem

I'm having an issue with the fact that mock assertions are exclusively verified on drop. In particular, if the mock is behind an Arc that is used in a detached async task, a test won't fail as the panic doesn't propagate to the test thread. This is somewhat related to https://github.com/asomers/mockall/issues/461 and https://github.com/asomers/mockall/issues/396.

While this can be circumvented in some use cases by joining all tasks manually or using interior mutability for the mocked object, I haven't been able to find a satisfying solution for my use case.

Proposal

Add a method like verify(&self) that allows checking all expectations without requiring mutability.

This method could be called on mocks behind an Arc, without having to worry about the logistics of where the last reference ends up dropping the mock.
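
Roughly the usage I have in mind (hypothetical sketch only; verify does not exist in Mockall today, and MockMetricsSystem refers to the mock from the example in the Context section below):

let mock = Arc::new({
    let mut m = MockMetricsSystem::new();
    m.expect_record().once().return_const(());
    m
});

// ... hand clones of mock to detached tasks and drive the system under test ...

// Check all expectations through a shared reference, even though other clones
// of the Arc may still be alive in background tasks.
mock.verify();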

Context

Conceptual code:

use actix_web::{dev::Service, http::StatusCode, test::TestRequest, web, App, HttpResponse};
use std::sync::Arc;
use tokio::sync::mpsc;

struct RequestMetricsRecord {
    status: StatusCode,
    // ...
}

#[mockall::automock]
trait MetricsSystem: Send + Sync + 'static {
    fn record(&self, record: RequestMetricsRecord);
    // ...
}

#[derive(Clone)]
struct MetricsMiddleware {
    recorder: mpsc::Sender<RequestMetricsRecord>,
}

impl MetricsMiddleware {
    /// Creates a new factory for the actual middleware.
    ///
    /// This spawns a background task to collect the metrics for better response times.
    fn new(metrics: Arc<dyn MetricsSystem>) -> Self {
        let (recorder, mut receiver) = mpsc::channel(100);
        // Note: Task is detached because there is no logical point to join it.
        // It exits by itself once the last sender for the channel is dropped by the Actix server.
        tokio::spawn(async move {
            while let Some(record) = receiver.recv().await {
                metrics.record(record);
            }
        });
        Self { recorder }
    }
}

// Implement the actual middleware logic that records incoming HTTP requests.
// Omitted for brevity - simply sends metrics over the channel to the above background task.
// ...

#[actix_web::test]
async fn test_middleware_records_metrics() {
    // Setup a mock that expects one call to record with a 200 status code.
    let mut mock_metrics = MockMetricsSystem::new();
    mock_metrics
        .expect_record()
        .withf(|record| record.status == StatusCode::OK)
        .once()
        .return_const(());
    let shared_metrics = Arc::new(mock_metrics);

    // Configure a server that should trigger the middleware to call the mock.
    let app = actix_web::test::init_service(
        App::new()
            .wrap(MetricsMiddleware::new(shared_metrics.clone()))
            .route("/ok", web::get().to(HttpResponse::Ok)),
    )
    .await;

    // Make a request to trigger the middleware
    app.call(TestRequest::get().uri("/ok").to_request()).await.unwrap();
}

Note that even manually adding drop(app), to ensure the Actix server shuts down before the mock is dropped, doesn't work here because the detached background task outlives the drop.

Potential Workarounds


If you have any other suggestions to solve this, I'm happy to hear them.

Thanks for the great work on this library!

asomers commented 1 month ago

One way or another, your test needs to ensure that the background task doesn't panic. Otherwise those background tasks are big test escapes. It's not just Mockall that might cause a panic there.

Plebshot commented 1 month ago

Thanks for the quick reply!

While I agree with that in principle, I believe this is already ensured here:

  1. Any implementation of MetricsSystem would already have its own tests ensuring that its record method does not panic. So unless the tokio receiver suddenly panics, it is in fact only Mockall that can cause a panic here. Any more complex logic for the background task would also be extracted into its own function and tested separately.
  2. In the worst case, where the background task does end up panicking despite that, the failure is detected and explicitly handled by anyone trying to use the sender, because sending on a channel whose receiver has been dropped yields an error (see the sketch below).
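
For illustration, a minimal standalone sketch of that second point with a plain tokio::sync::mpsc channel (not my actual middleware code):

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel::<u32>(8);

    // Simulate the background task dying (e.g. after a panic):
    // its receiver gets dropped.
    drop(rx);

    // The sender now observes the failure instead of silently losing data.
    if let Err(e) = tx.send(42).await {
        eprintln!("background task is gone: {e}");
    }
}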

I'm aware this design isn't perfect in the first case, but I'm also of the opinion that simple solutions often result in fewer errors. So it's unfortunate that Mockall's current API somewhat restricts how you can structure your code here.

Are there any specific reasons why checking the expectations has to be mutable, other than also wanting to reset them in the checkpoint method? Sadly I can't really wrap my head around all the macros to make a contribution here.

asomers commented 1 month ago

Yes, the checkpoint method requires mutable access so it can clear current expectations. BTW, if you really really really don't want to assert that your background task didn't panic, other options are:

Plebshot commented 1 month ago

The oneshot channel in the mock return is a great tip to prevent the test from exiting before the background task has had a chance to call record (note: it has to be return_once rather than returning, because returning takes an FnMut closure and sending consumes the oneshot sender). I thought about unwrapping the Arc too. It works, but it also introduces some unsatisfying boilerplate once more.
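
For reference, a rough sketch of that workaround (the test name and the elided middleware wiring are placeholders; MockMetricsSystem is the mock from the original example):

use std::sync::Arc;
use tokio::sync::oneshot;

#[tokio::test]
async fn records_metric() {
    let (done_tx, done_rx) = oneshot::channel::<()>();

    let mut mock = MockMetricsSystem::new();
    mock.expect_record()
        .once()
        // return_once, not returning: sending consumes the oneshot sender,
        // so the closure can only be FnOnce.
        .return_once(move |_record| {
            let _ = done_tx.send(());
        });
    let shared = Arc::new(mock);

    // ... wire shared.clone() into the middleware and make the request ...

    // Block the test until the background task has actually called record().
    done_rx.await.expect("record() was never called");
}

Unwrapping the Arc afterwards (e.g. with Arc::try_unwrap, once all other clones are gone) would additionally move the drop-time verification back onto the test thread, which is the boilerplate I was referring to.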

> Yes, the checkpoint method requires mutable access so it can clear current expectations.

The point of something like a verify, assert or satisfied function on a mock would be that it doesn't clear the current expectations. All it should do is check if all expectations are satisfied at the current point in time, and panic (or otherwise indicate) if they're not. I'd imagine this is pretty much 1:1 the logic that is already present in the drop for an expectation, or are there any other complications here?

asomers commented 1 month ago

> The point of something like a verify, assert or satisfied function on a mock would be that it doesn't clear the current expectations. All it should do is check if all expectations are satisfied at the current point in time, and panic (or otherwise indicate) if they're not. I'd imagine this is pretty much 1:1 the logic that is already present in the drop for an expectation, or are there any other complications here?

It's not quite the same as the logic that's already present. The existing checkpoint logic simply drops the Expectation objects, so you would need to do something different. Would you like to submit a patch?

FTR, the reason that the checkpoint method clears existing expectations, rather than just checking that they are satisfied, is the precedent set by the Mockers crate.

Plebshot commented 1 month ago

I haven't worked with macros in Rust yet, so I'm having a hard time understanding the library well enough to figure out where to add a function like this, or what exactly it would have to check.

What I meant initially was moving this part of the drop implementation into its own fn assert(&self) (or whatever you want to call it) and adding an equally named method on the generated mocks that calls it for all expectations.
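
Conceptually, something along these lines (illustrative names only, not Mockall's actual generated code or internals):

use std::sync::atomic::{AtomicUsize, Ordering};

struct Expectation {
    calls: AtomicUsize,
    min_calls: usize,
}

impl Expectation {
    /// Panic if this expectation's call-count requirement is not yet met.
    fn assert(&self) {
        let calls = self.calls.load(Ordering::Relaxed);
        assert!(
            calls >= self.min_calls,
            "expectation called {calls} times, expected at least {}",
            self.min_calls
        );
    }
}

impl Drop for Expectation {
    fn drop(&mut self) {
        // Same check at drop time, as today.
        if !std::thread::panicking() {
            self.assert();
        }
    }
}

struct MockFoo {
    expectations: Vec<Expectation>,
}

impl MockFoo {
    /// Verify every expectation through a shared reference, without clearing anything.
    fn assert(&self) {
        for e in &self.expectations {
            e.assert();
        }
    }
}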

asomers commented 1 month ago

Yes, that would have to be a part of your patch.

Plebshot commented 1 month ago

Is there any chance you could get around to adding this, or help me with the initial scaffolding for the macro-related code? Otherwise, I'm not sure when I'll get enough time to solve this on my own.

Would be greatly appreciated.

asomers commented 1 month ago

Probably not very soon, I'm afraid. My open-source time is limited right now, and there are a lot of different projects placing demands on it.