lunarmodules / busted

Elegant Lua unit testing.
https://lunarmodules.github.io/busted/
MIT License

dealing with tests that cannot succeed #665

Open ligurio opened 3 years ago

ligurio commented 3 years ago

Sometimes tests cannot be fixed quickly and you expect them to fail. In such cases it's common practice to mark them accordingly with statuses like XFail or Skip.

A Skip means that you expect your test to pass unless a certain configuration or condition prevents it from running. An XFail means that your test can run, but you expect it to fail because there is an implementation problem.

It would be nice to have functionality to set such a test status in the test source code.
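
For the Skip case, busted's built-in `pending` already comes close: the test is listed in the report but its body is not run. A minimal sketch of a conditional skip built on top of it (untested; `should_skip` is a hypothetical predicate for whatever configuration or condition applies):

local maybe_it = function(description, fn)
    if should_skip() then
        -- report the test as pending instead of running it
        pending("[skipped] " .. description, fn)
    else
        it(description, fn)
    end
end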

Tieske commented 3 years ago

You can use tags and then include or exclude those tags based on the conditions under which you run the tests.
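
For reference, a tag is embedded in the test name with a `#` prefix and selected via the `--tags` / `--exclude-tags` command-line options; a small sketch (the tag name is made up):

it("talks to the real database #integration", function()
    -- runs with `busted --tags=integration`,
    -- skipped entirely with `busted --exclude-tags=integration`
end)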

But I might not understand your request exactly...

ligurio commented 3 years ago

@Tieske filtering using tags excludes tests from the test report, which is not desired. See how this functionality is implemented in pytest: https://docs.pytest.org/en/latest/how-to/skipping.html

Tieske commented 3 years ago

Here's how we do stuff like that (untested):

-- 'get_platform' is assumed to return the current platform name, e.g. "windows"
local platform_it = function(platforms, description, ...)
    if type(platforms) ~= "table" then
        -- no platform list given: 'platforms' actually holds the description,
        -- so fall back to a plain 'it' call
        return it(platforms, description, ...)
    end

    -- run the test only when the current platform is listed
    local platform = get_platform()
    local test = false
    for _, plat in ipairs(platforms) do
        if plat == platform then
            test = true
            break
        end
    end
    if test then
        return it(description, ...)
    end
    -- otherwise mark it as pending, so it still shows up in the report
    return pending("[skipping on " .. platform .. "] " .. description, ...)
end

platform_it({ "windows", "osx" }, "a test as usual", function()
    -- test something, only on Windows and OSX, not on Linux
end)

platform_it({ "osx", "linux" }, "another test as usual", function()
    -- test something, only on OSX and Linux, not on Windows
end)

jamessan commented 3 years ago

The important aspect of an xfail test is that it still runs but it's expected to fail.

This is useful to document the expected behavior for a scenario that's known to be failing (e.g., a bug report) but hasn't been fixed yet. If something changes that fixes the test, you're alerted to it, because the test passing is treated as a failure.

At that point, you can verify whether the behavior change is intended and simply switch it from "xfail" to a normal test.
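
In the absence of built-in support, a minimal sketch of an xfail helper on top of busted's `it` (untested; `xfail_it` is a made-up name, and it assumes a failing body raises an error, as luassert assertions do):

local xfail_it = function(description, fn)
    it("XFAIL: " .. description, function()
        -- run the real body; pcall absorbs the expected failure
        local ok = pcall(fn)
        -- the wrapper fails only when the known-broken body passes
        assert(not ok, "expected failure, but the test passed")
    end)
end

xfail_it("documents a known bug", function()
    assert.is_true(buggy_behavior()) -- hypothetical failing check
end)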

DorianGray commented 3 years ago

We might be able to extend the reporting functionality to report on excluded tests, maybe? Tags exist specifically so that skip, xfail, etc. can all be handled the same way anyway.

alerque commented 2 years ago

I don't think reporting on excluded tests answers this question; the point of an xfail test is that it is included but its mode is reversed. Such tests are not excluded, they are run, but the expected outcome is the opposite of the declared expectation. That way you sound an alarm if a known-broken test starts passing because you fixed a bug you didn't realize was affecting it (for the better).

I don't see a way to do that with the tag system. We can include and exclude tests, but not reverse their modes.