ipytest allows you to run Pytest in Jupyter notebooks. ipytest aims to give access to the full pytest experience and to make it easy to transfer tests out of notebooks into separate test files.
Install ipytest by running
pip install ipytest
The suggested way to import ipytest is
import ipytest
ipytest.autoconfig()
Afterwards in a new cell, tests can be executed as in
%%ipytest -qq
def test_example():
    assert [1, 2, 3] == [1, 2, 3]
This command will first delete any previously defined tests, execute the cell, and then run pytest. For further details on how to use ipytest, see the example notebook or the reference below.
There are multiple sources of global state when using pytest inside the notebook:

Tests defined in earlier cells remain available inside the kernel. %%ipytest per default deletes any previously defined tests. As an alternative, the ipytest.clean() function allows deleting previously defined tests.

Imported modules are cached by Python and are not reloaded from disk automatically. To force a reload, ipytest offers the ipytest.force_reload() function. The autoreload extension of IPython may also help here. To test local packages, it is advisable to install them as development packages, e.g., pip install -e .

The event loop of the notebook itself acts as global state when testing async code. In this case, run pytest in a separate thread via ipytest.autoconfig(run_in_thread=True).
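As a rough sketch, the mechanisms mentioned above can be combined in a single setup cell; the package name my_package is purely illustrative:

import ipytest
ipytest.autoconfig(run_in_thread=True)  # run pytest in a separate thread, e.g., for async tests
ipytest.clean()                         # delete previously defined tests from the notebook globals
ipytest.force_reload("my_package")      # reload my_package from disk on its next import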
In its default configuration (via autoconfig()), ipytest registers its magics and assertion rewriting with IPython; when tests are run, it associates the notebook with a module that pytest can collect. As part of this step, ipytest will create a temporary file in the current directory and remove it afterwards.

NOTE: Some notebook implementations modify the core IPython package and magics may not work correctly (see here or here). In this case, using ipytest.run() and ipytest.clean() directly should still work as expected.
autoconfig | %%ipytest | config | exit_code | run | clean | force_reload | Error | ipytest.cov
ipytest.autoconfig(rewrite_asserts=<default>, magics=<default>, clean=<default>, addopts=<default>, run_in_thread=<default>, defopts=<default>, display_columns=<default>, raise_on_error=<default>, coverage=<default>)
Configure ipytest with reasonable defaults.
Specifically, it sets:
addopts: ('-q', '--color=yes')
clean: '[Tt]est*'
coverage: False
defopts: 'auto'
display_columns: 100
magics: True
raise_on_error: False
rewrite_asserts: True
run_in_thread: False
See ipytest.config for details.
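For example, a minimal sketch of overriding a couple of these defaults while keeping the rest; the chosen values are illustrative:

import ipytest
# keep the quiet, colored output, but stop after the first failure and
# raise ipytest.Error when tests fail (e.g., useful in CI/CD)
ipytest.autoconfig(addopts=("-q", "--color=yes", "-x"), raise_on_error=True)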
%%ipytest ...
IPython magic to first execute the cell, then execute ipytest.run().
Note: the magics are only available after running ipytest.autoconfig() or ipytest.config(magics=True).
It cleans any previously found tests, i.e., only tests defined in the current cell are executed. To disable this behavior, use ipytest.config(clean=False).
Any arguments passed on the magic line are interpreted as command line arguments to pytest. For example, calling the magic as
%%ipytest -qq
is equivalent to passing -qq to pytest. The arguments are formatted using Python's standard string formatting. Currently, only the {MODULE} variable is understood. It is replaced with the filename associated with the notebook. In addition, node ids for tests can be generated by using the test name as a key, e.g., {test_example} will expand to {MODULE}::test_example.
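For instance, a cell along the following lines (the test names are purely illustrative) would run only test_example, since {test_example} expands to the node id {MODULE}::test_example:

%%ipytest {test_example}
def test_example():
    assert [1, 2, 3] == [1, 2, 3]

def test_other():
    assert True  # defined in the cell, but not selected by the node id above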
The keyword arguments passed to ipytest.run() can be customized by including a comment of the form # ipytest: arg1=value1, arg2=value2 in the cell source. For example:
%%ipytest {MODULE}::test1
# ipytest: defopts=False
is equivalent to ipytest.run("{MODULE}::test1", defopts=False). In this case, it deactivates default arguments and then instructs pytest to only execute test1.
NOTE: In the default configuration %%ipytest will not raise exceptions when tests fail. To raise exceptions on test errors, e.g., inside a CI/CD context, use ipytest.autoconfig(raise_on_error=True).
ipytest.config(rewrite_asserts=<keep>, magics=<keep>, clean=<keep>, addopts=<keep>, run_in_thread=<keep>, defopts=<keep>, display_columns=<keep>, raise_on_error=<keep>, coverage=<default>)
Configure ipytest. To update the configuration, call this function as in:
ipytest.config(rewrite_asserts=True)
The following settings are supported:
rewrite_asserts (default: False): enable IPython AST transforms globally to rewrite asserts

magics (default: False): if set to True, register the ipytest magics

coverage (default: False): if True, configure pytest to collect coverage information. This functionality requires the pytest-cov package to be installed. It adds --cov --cov-config={GENERATED_CONFIG} to the arguments when invoking pytest. WARNING: this option will hide existing coverage configuration files. See ipytest.cov for details

clean (default: '[Tt]est*'): the pattern used to clean variables

addopts (default: ()): pytest command line arguments to prepend to every pytest invocation. For example, setting ipytest.config(addopts=['-qq']) will execute pytest with the least verbosity. Consider adding --color=yes to force color output

run_in_thread (default: False): if True, pytest will be run in a separate thread. This way of running is required when testing async code with pytest_asyncio, since it starts a separate event loop

defopts (default: "auto"): either "auto", True, or False. With "auto", ipytest will add the current notebook module to the command line arguments if no pytest node ids that reference the notebook are provided by the user. With True, ipytest will add the current module to the arguments passed to pytest. With False, only the arguments given and addopts are passed to pytest

display_columns (default: 100): if not False, configure pytest to use the given number of columns for its output. This option will temporarily override the COLUMNS environment variable

raise_on_error (default: False): if True, ipytest.run and %%ipytest will raise an ipytest.Error if pytest fails.

ipytest.exit_code
The return code of the last pytest invocation.
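As an illustrative sketch, the configuration can be updated at any point and the result of the last test run inspected afterwards:

import ipytest
ipytest.config(display_columns=120, raise_on_error=True)  # widen output, raise ipytest.Error on failures
# ... run tests via %%ipytest or ipytest.run() ...
print(ipytest.exit_code)  # return code of the last pytest invocation, 0 if all tests passed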
ipytest.run(*args, module=None, plugins=(), run_in_thread=<default>, raise_on_error=<default>, addopts=<default>, defopts=<default>, display_columns=<default>, coverage=<default>)
Execute all tests in the passed module (defaults to __main__) with pytest.
This function is a thin wrapper around pytest.main and will execute any tests defined in the current notebook session.
NOTE: In the default configuration ipytest.run() will not raise exceptions when tests fail. To raise exceptions on test errors, e.g., inside a CI/CD context, use ipytest.autoconfig(raise_on_error=True).
Parameters:
args: additional command line options passed to pytest

module: the module containing the tests. If not given, __main__ will be used.

plugins: additional plugins passed to pytest.

The following parameters override the config options set with ipytest.config() or ipytest.autoconfig():

run_in_thread: if given, override the config option "run_in_thread".

raise_on_error: if given, override the config option "raise_on_error".

addopts: if given, override the config option "addopts".

defopts: if given, override the config option "defopts".

display_columns: if given, override the config option "display_columns".

Returns: the exit code of pytest.main.
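For example, a minimal sketch of calling it directly; the extra option is illustrative:

# run all tests defined in the notebook, stopping at the first failure
exit_code = ipytest.run("-x")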
ipytest.clean(pattern=<default>, *, module=None)
Delete tests with names matching the given pattern.
In IPython the results of all evaluations are kept in global variables unless explicitly deleted. This behavior implies that when tests are renamed, the previous definitions will still be found if not deleted. This method aims to simplify this process.
An effective pattern is to start the cell containing tests with a call to ipytest.clean(), then define all test cases, and finally call ipytest.run(). This way renaming tests works as expected.
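A sketch of this pattern within a single notebook cell; the test is purely illustrative:

ipytest.clean()  # delete tests defined in earlier cell executions

def test_renamed():
    assert 1 + 1 == 2

ipytest.run()  # collect and run the tests defined above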
Parameters:
pattern: a glob pattern used to match the tests to delete. If not given, the "clean" config option is used.

module: the globals object containing the tests. If None is given, the globals object is determined from the call stack.

ipytest.force_reload(*include, modules: Optional[Dict[str, module]] = None)
Ensure that following imports of the listed modules reload the code from disk.

The given modules and their submodules are removed from sys.modules. The next time the modules are imported, they are loaded from disk.

If given, the parameter modules should be a dictionary of modules to use instead of sys.modules.
Usage:
ipytest.force_reload("my_package")
from my_package.submodule import my_function
ipytest.Error(exit_code)
Error raised by ipytest on test failure
ipytest.cov
A coverage.py plugin to support coverage in Jupyter notebooks
The plugin must be enabled in a .coveragerc next to the current notebook or in the pyproject.toml file. See the coverage.py docs for details. In case of a .coveragerc file, the minimal configuration reads:
[run]
plugins =
    ipytest.cov
With this config file, coverage information can be collected using pytest-cov with
%%ipytest --cov
def test():
    ...
ipytest.autoconfig(coverage=True) automatically adds the --cov flag and the path of a generated config file to the Pytest invocation. In this case no further configuration is required.
There are some known issues with ipytest.cov.
ipytest.cov.translate_cell_filenames(enabled=True)
Translate the filenames of notebook cells in coverage information.
If enabled, ipytest.cov will translate the temporary file names generated by ipykernel (e.g., ipykernel_24768/3920661193.py) to their cell names (e.g., In[6]).
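For example, a sketch of disabling the translation so that coverage reports keep the raw ipykernel file names:

import ipytest.cov
ipytest.cov.translate_cell_filenames(enabled=False)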
Warning: this is an experimental feature and not subject to any stability guarantees.
Set up a Python 3.10 virtual environment and install the requirements via
pip install -r requirements-dev.txt
pip install -e .
To execute the unit tests and the integration tests of ipytest, run
python x.py test
python x.py integration
Before committing, execute python x.py precommit to update the documentation, format the code, and run tests.
To create a new release execute:
python x.py release
ipytest is designed to enable running tests within an interactive notebook session. There are also other packages that aim to test full notebooks: these packages run the notebook and compare the output of cells to the output of previous runs. These packages include:
While PyTest itself is generally supported, support for PyTest plugins depends very much on the plugin. The following plugins are known to not work:
See ipytest.cov on how to use ipytest with pytest-cov.
Please create an issue if I missed a package or mischaracterized any package.
The MIT License (MIT)
Copyright (c) 2015 - 2024 Christopher Prohm
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.