catchorg / Catch2

A modern, C++-native, test framework for unit-tests, TDD and BDD - using C++14, C++17 and later (C++11 support is in v2.x branch, and C++03 on the Catch1.x branch)

add overload Catch::makeTestInvoker() which takes std::function<void()>. #1668

Open bowie7070 opened 5 years ago

bowie7070 commented 5 years ago

I have a test case which looks similar to the following:

TEST_CASE("name", "[tag]") {
    std::vector<double> const time  = {...};
    std::vector<double> const other = {...};

    for (size_t t = 0, T = time.size(); t < T; ++t) {
        for (size_t k = 0, K = other.size(); k < K; ++k) {
            check(t, k);
        }
    }
}

I'd like to be able to make check(t,k) a separate test case. I believe that if Catch::makeTestInvoker() were overloaded to take a std::function<void()>, I would be able to do something like:

void manual_register_test() {
    std::vector<double> const time  = {...};
    std::vector<double> const other = {...};

    for (size_t t = 0, T = time.size(); t < T; ++t) {
        for (size_t k = 0, K = other.size(); k < K; ++k) {
            auto const check_tk = [=]() { check(t, k); };
            std::string name_check_tk = /*...*/;
            REGISTER_TEST_CASE(check_tk, name_check_tk, "[tag]");
        }
    }
}

From what I can tell, adding code similar to the following would do the job:

class test_invoker_std_function : public Catch::ITestInvoker {
    std::function<void()> f_;

    // Called by Catch2 to run the test body.
    void invoke() const override { f_(); }

public:
    explicit test_invoker_std_function(std::function<void()> f) : f_(std::move(f)) {}
};

auto makeTestInvoker(std::function<void()> f) noexcept -> Catch::ITestInvoker* {
    return new (std::nothrow) test_invoker_std_function(std::move(f));
}

horenmar commented 5 years ago

Why do you want to make it a separate test case though?

bowie7070 commented 5 years ago

I want to create separate test cases to break up long-running tests into small tests. We have tests that take many minutes to complete. Unfortunately, we also have tests that succeed in isolation but fail when run as part of a specific sequence of tests. If the sequence includes one of the long-running tests, it is impractical to debug the problem. By breaking the tests apart, we have a chance of finding a failing sequence that is quick enough to run repeatedly, making it practical to debug.

A second reason to break the tests apart: because our tests take hours to complete, it is only practical to run a fraction of them during the day while making modifications. So we tend to run them in random order to maximize coverage of our different systems. The long-running tests defeat that, because we get stuck testing one system for a long time.

When we need to update our database of test results, it is also easier when the tests are smaller rather than larger. We can use sections for that, but sections don't help with the first two issues.


katusk commented 5 years ago

Great, I am not alone. @bowie7070 I did something similar for my use case: inheriting from Catch::ITestInvoker and tossing in a std::function<void()> as a member.

Basically I want data-parametrized test cases where each piece of input data is dynamically generated (in my case, read from a test input data file at run time), and for each data point a new unit test is created with a proper name reflecting the input parameter values. (You can do this pretty easily in e.g. C#/xUnit; see https://andrewlock.net/creating-parameterised-tests-in-xunit-with-inlinedata-classdata-and-memberdata/#loading-data-from-a-property-or-method-on-a-different-class)

Catch2's data generators (see the GENERATE macro) are not sufficient here, as they do not create new test cases (you put them inside a single dedicated TEST_CASE...).

And you cannot use raw function pointers, as you need some kind of state to be passed to the Catch2 invoker via REGISTER_TEST_CASE (capturing lambdas cannot be converted to raw function pointers). If REGISTER_TEST_CASE accepted std::function<void()> (which capturing lambdas can be converted to), you could programmatically register individual test cases with any name you want, running the same function with different input parameters.

@horenmar Anyway, if I use e.g. Catch2's JUnit reporter and hook up that JUnit XML output to our CI, I want to see a thousand or so separate test cases where I can immediately see what the input parameters were, instead of a single test case where all of the actual data points tested are effectively hidden. Plus, if one data point fails its checks, I do not want the whole run reported as one failing test case; I want one failing test case and the others green. Especially in the failing case, it helps a lot to immediately see the exact input parameters used (assuming the test case names contain the serialized input values).