Plugin for py.test that integrates with GitHub using markers. The integration allows tests to xfail (or skip) based on the status of linked GitHub issues.
Install the plugin using pip:
pip install pytest-github
The following py.test command-line parameters are available:

py.test \
    [--github-cfg=GITHUB_CFG] \
    [--github-username=GITHUB_USERNAME] \
    [--github-token=GITHUB_TOKEN] \
    [--github-completed=GITHUB_COMPLETED] \
    [--github-summary]
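For example, to point the plugin at a specific configuration file and request the GitHub issue summary, an invocation might look like the following (the configuration path and test directory are illustrative placeholders):

py.test --github-cfg=config/github.yml --github-summary tests/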
Configure the plugin with a github.yml file that contains your GitHub username and personal API token. A sample file is included below.

github:
    username: j.doe
    token: XXXXXXXXXXXXX
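Alternatively, the same credentials can be supplied directly on the command line using the flags listed above (the values shown are placeholders):

py.test --github-username=j.doe --github-token=XXXXXXXXXXXXX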
The following py.test marker is available:
@pytest.mark.github(*args): GitHub issue integration
The marker can be used to influence the outcome of tests. See the examples below for guidance.
Often, when a test fails, one might file a GitHub issue to track the resolution of the problem. Alternatively, you could use the built-in xfail marker. This is where pytest-github can help. To avoid having to review known failures with each test run, and to avoid unconditionally marking tests with xfail, use the github marker to dynamically influence the test outcome based on the state of the linked GitHub issue.
The following example demonstrates using the github marker to influence the outcome of a known failing test.
@pytest.mark.github('https://github.com/some/open/issues/1')
def test_will_xfail():
    assert False
Running this test with py.test will produce the following output:
test.py::test_will_xfail xfail
To avoid masking additional failures that might be uncovered by a test while a GitHub issue is being resolved, you can restrict expected failures to specific exceptions using the raises keyword argument:
@pytest.mark.github('https://github.com/some/open/issues/1', raises=ZeroDivisionError)
def test_will_xfail():
    foo = 1/0


@pytest.mark.github('https://github.com/some/open/issues/1', raises=ValueError)
def test_will_fail():
    # This test has been marked with an open issue but it will still fail
    # because the exception raised is different from the one indicated by
    # the 'raises' keyword.
    foo = 1/0
Running these tests with py.test will produce the following output:
collected 2 items
collected 1 github issues
test.py::test_will_xfail xfail
test.py::test_will_fail FAILED
The following example demonstrates a test that succeeds, despite being associated with an open GitHub issue.
@pytest.mark.github('https://github.com/some/open/issues/1')
def test_will_xpass():
    assert True
In this example, the XPASS outcome (a.k.a. unexpected pass) is used.
test.py::test_will_xpass XPASS
The following example demonstrates a test that succeeds while it is associated with a closed GitHub issue. When the linked issue is closed, the marker does not alter the outcome and the test result is reported normally.
@pytest.mark.github('https://github.com/some/closed/issues/2')
def test_will_pass():
    assert True
In this example, the PASSED outcome is used.
test.py::test_will_pass PASSED
The following example demonstrates a test that fails, while it is associated with a closed GitHub issue.
@pytest.mark.github('https://github.com/some/closed/issues/2')
def test_will_fail():
    assert False
In this example, the FAILED outcome is used.
test.py::test_will_fail FAILED
The following example demonstrates a failing test that is skipped, rather than xfailed, because the marker is given skip=True and the linked GitHub issue is open.
@pytest.mark.github('https://github.com/some/open/issues/1', skip=True)
def test_will_skip():
    assert False
In this example, the SKIPPED outcome is used.
test.py::test_will_skip SKIPPED
The following example demonstrates a parametrized test that uses the github marker, together with the ids keyword argument, to influence the outcome of only a subset of the parametrized cases of a known failing test.
@pytest.mark.github('https://github.com/some/open/issues/1', ids=['even2', 'even4'])
@pytest.mark.parametrize("count", [1, 2, 3, 4], ids=["odd1", "even2", "odd3", "even4"])
def test_will_xfail(count):
    assert count % 2
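With the assertion assert count % 2, the odd cases pass while the even cases fail; because only even2 and even4 are listed in ids, those failures are reported as expected. Running this test should produce output similar to the following (exact formatting may vary with the pytest version):

test.py::test_will_xfail[odd1] PASSED
test.py::test_will_xfail[even2] xfail
test.py::test_will_xfail[odd3] PASSED
test.py::test_will_xfail[even4] xfail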
The --github-summary option lists all GitHub issues referenced by a github marker. The list is divided into two sections, Resolved Issues and Unresolved Issues, where an issue is considered resolved if it has one of the GITHUB_COMPLETED labels. Beneath each issue is a listing of all tests that reference the issue.
Sample output:
Unresolved Issues
https://github.com/repo/open/issues/1
- test_suite.py:test_foo
https://github.com/repo/open/issues/2
- test_suite.py:test_bar
Resolved Issues
https://github.com/repo/open/issues/3
- test_suite.py:test_baz
- test_suite.py:test_bah