Closed: verhovsky closed this issue 1 year ago
It is already possible to use pytest with `py_test`.
```starlark
py_test(
    name = "unit_test",
    srcs = ["test.py"],
    main = "test.py",
    python_version = "PY3",
    srcs_version = "PY3",
    deps = TEST_DEPS,
)
```
`test.py`:

```python
import sys

import pytest


def test_aws_login_without_aws_profile():
    assert True


if __name__ == "__main__":
    sys.exit(pytest.main([__file__]))
```
Thanks for the quick reply. I saw someone mention this trick on the mailing list: https://groups.google.com/forum/?nomobile=true#!searchin/bazel-discuss/pytest|sort:date/bazel-discuss/d1RKUmyve_Q/hm_Dc3XYCAAJ

If you forget the `sys.exit` like the person in that mailing list post, Bazel will never know if the tests in that file fail. It would also just be nice if you didn't need 3-4 cryptic lines of boilerplate in every test file.
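The `sys.exit` matters because Bazel only looks at the test process's exit code. A stdlib-only sketch (where `fake_pytest_main` is a made-up stand-in for `pytest.main`, which returns a nonzero code when tests fail) shows how the failure gets swallowed and how `sys.exit` fixes it:

```python
import subprocess
import sys
import textwrap

# A script that "forgets" sys.exit, like the mailing-list example:
# the failing result is computed but never becomes the exit code.
forgot_exit = textwrap.dedent("""
    def fake_pytest_main():
        return 1  # pretend pytest reported failing tests

    if __name__ == "__main__":
        fake_pytest_main()  # return value silently discarded
""")

# The same script with sys.exit: the failure propagates to the process.
with_exit = textwrap.dedent("""
    import sys

    def fake_pytest_main():
        return 1

    if __name__ == "__main__":
        sys.exit(fake_pytest_main())
""")

forgot = subprocess.run([sys.executable, "-c", forgot_exit])
fixed = subprocess.run([sys.executable, "-c", with_exit])
print(forgot.returncode, fixed.returncode)  # → 0 1
```

Without `sys.exit` the process exits 0, so Bazel marks the test as passed even though the test run "failed".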
@brandjon: it's something I have in my fork of rules_python, using a genrule to create a snippet similar to the one in @groodt comment above.
Do you think it makes sense to add it to experimental/ ?
i added a sample macro for pytest here https://github.com/ali5h/rules_pip/blob/master/defs.bzl#L116
PTAL at pull request #243. It takes a slightly different approach. In particular, it does not pull in a dependency on pytest on its own, to give users more freedom (e.g. whether to use requirements.txt, a virtualenv, or to vendor pytest in the repository).
i added that as a sample. there is no need to load pytest explicitly; i updated the sample to show this
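For readers who don't want to follow the link, such a macro amounts to roughly the following sketch. The labels `//tools:pytest_main.py` and `//third_party:pytest` are placeholders, not targets from any actual ruleset:

```starlark
load("@rules_python//python:defs.bzl", "py_test")

def pytest_test(name, srcs, deps = [], args = [], **kwargs):
    # Hypothetical wrapper macro: reuses a shared pytest entry point so
    # individual test files don't need the __main__ boilerplate.
    py_test(
        name = name,
        srcs = srcs + ["//tools:pytest_main.py"],
        main = "//tools:pytest_main.py",
        # Pass the test files to the shared entry point as arguments.
        args = args + ["$(location %s)" % src for src in srcs],
        deps = deps + ["//third_party:pytest"],  # however pytest is vendored
        **kwargs
    )
```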
Supposing defining your own BUILD rule is the right solution, semantically it would belong in /contrib (though there isn't one) rather than /experimental, which is more for things that might become official in the future.

This issue is specifically a pytest 'problem', in the sense that it is a framework that does 'a lot' (maybe too much, in a sense). From the viewpoint of the build system, the `py_test` rule should not make any assumptions about the existence of pytest or any other custom testing framework.
Personally I think having a 10-line main file as @ali5h did is the most straightforward approach, and I probably won't even create any custom BUILD rule:

```starlark
py_test(
    name = "my_target",
    srcs = [
        ...
        "//path/to:pytest_main",
    ],
    main = "//path/to:pytest_main",
    ...
)
```
If the `test.py` has relative import code like `from .module import xx`, then `bazel test` would fail:

```
ModuleNotFoundError: No module named '__main__.module'; '__main__' is not a package
```
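This failure can be reproduced without Bazel at all; here's a self-contained sketch (the package and file names are made up) showing that the same file imports fine when run as a package module but not when run directly as a `__main__` script, which is effectively what a `py_test` main file is:

```python
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Build a tiny package: pkg/module.py defines xx, pkg/test.py
    # imports it relatively.
    pkg = os.path.join(tmp, "pkg")
    os.makedirs(pkg)
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, "module.py"), "w") as f:
        f.write("xx = 42\n")
    with open(os.path.join(pkg, "test.py"), "w") as f:
        f.write("from .module import xx\nprint(xx)\n")

    # Run as a package module: the relative import resolves.
    as_module = subprocess.run(
        [sys.executable, "-m", "pkg.test"], cwd=tmp, capture_output=True
    )
    # Run directly as a script: __main__ is not a package, so it fails.
    as_script = subprocess.run(
        [sys.executable, os.path.join(pkg, "test.py")],
        cwd=tmp,
        capture_output=True,
    )

print(as_module.returncode != 0, as_script.returncode != 0)  # → False True
```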
We don't use relative imports anywhere in our Bazel workspace. They're not worth the trouble imo, and I wouldn't be surprised at all if implementation details in Python rules made them not work correctly.
@thundergolfer why do Python rules have the `imports` attribute? https://docs.bazel.build/versions/master/be/python.html
@doubler We used to use that `imports` attr, but I think it's a code smell. For our use cases we've been able to move to absolute imports and remove all use of `imports = []` in our BUILD files.
@dayfine makes a solid point: `py_test` strictly operates off return codes and the absence of segfaults or the like during test execution. That `py_test` and pytest are phonetically identical makes the concepts crowd into each other somewhat unavoidably, but they are very distinct.
There's also the more generalized approach from what @groodt shares above, which includes passing arguments from Bazel `py_test` targets into pytest via the `args` attribute. I've used that with great success to render additional debug detail on failures, and even to factor a parametrized pytest case with a deep dependency graph out into one `py_test` target per parameter value in the BUILD file via a list comprehension. Where pytest would normally iterate over all test cases (even those that aren't implicated in the current diff to the repo), Bazel is more nimble and only runs those test parameters that are implicated by a particular change. That saves a great deal of test execution runtime by leaning on the remote cache for test results.
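That parametrization-splitting pattern looks roughly like the following in a BUILD file. The target names and parameter values are invented for illustration:

```starlark
load("@rules_python//python:defs.bzl", "py_test")

# Hypothetical parameter values: each becomes its own py_test target,
# so Bazel's test-result caching can skip unaffected parameters.
PARAMS = [
    "alpha",
    "beta",
    "gamma",
]

[
    py_test(
        name = "test_foo_" + p,
        srcs = ["test_foo.py"],
        args = ["-k", p],  # pytest's -k narrows the run to one parameter
        deps = ["pytest"],
    )
    for p in PARAMS
]
```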
`BUILD`:

```starlark
py_test(
    name = "test_foo",
    srcs = [
        "test_foo.py",
    ],
    args = [
        "--junit-xml=test_foo_results.xml",
    ],
    deps = [
        "pytest",
    ],
)
```
`test_foo.py`:

```python
import sys

import pytest

...

if __name__ == "__main__":
    # if using 'bazel test ...'
    args = sys.argv[1:]
    ret_code = pytest.main([__file__, "--verbose"] + args)
    # Print relevant debug logs or other artifacts for the CI job to have richer context
    if ret_code != 0:
        print_debug_stuff(args)
    # Exit with return code
    sys.exit(ret_code)
```
Actually now we're hitting the case that @doubler mentioned above...
> If the `test.py` has relative import code like `from .module import xx`, then `bazel test` would fail: `ModuleNotFoundError: No module named '__main__.module'; '__main__' is not a package`
The use case we're hoping to have supported is being able to execute test runs from either the `bazel` process or the `pytest` process. Being able to run from pytest is a convenience during development for debugging and site-packages package development work. It can also be achieved through bazel, but one has to do all the folder path hunting and `PYTHONPATH` setup around the bazel cache, I believe.
With pytest I can drop `pdb` breakpoints, but from bazel something forces the debug session to quit:

```python
import pdb
pdb.set_trace()
```

```
(Pdb)
Traceback (most recent call last):
  File ...
  File "/usr/lib/python3.6/bdb.py", line 51, in trace_dispatch
    return self.dispatch_line(frame)
  File "/usr/lib/python3.6/bdb.py", line 70, in dispatch_line
    if self.quitting: raise BdbQuit
bdb.BdbQuit
```
@jxramos RE the "forces the debug session to quit" I've copied below a workaround:
> ... Unfortunately, this does not work so simply for `bazel test` and `py_test` targets. If you add a breakpoint using `pdb` and run a test that will trigger that breakpoint, you won't be dropped into a REPL; the process will crash. As a workaround, change your `py_test` target to `py_binary` and do `bazel run` on it instead of `bazel test`. Your test code will run and you'll be dropped into the pdb debugger:
You should be able to "bazel run" a py_test target; no need to change it to py_binary first.
-- Paweł Stradomski
@pstradomski I tested using `py_test` and `pdb` and it did not work for me. If it actually does work, then I'd certainly be interested in finding out what was wrong in my setup.
So in my toy project I just did

```shell
$ bazel run //rssreader/common:tests_util_test_py
```

and it just worked (without pdb).

I suspect pdb has issues since "bazel run" closes stdin, but then I don't understand how changing to py_binary would help.

It's also possible to run the target once and then invoke the built binary directly:

```shell
$ bazel run //rssreader/common:tests_util_test_py
$ bazel-bin/rssreader/common/tests_util_test_py
```

which avoids the problem of stdin takeover, but is not quite friendly.
-- Paweł Stradomski
> but then I don't understand how changing to py_binary would help.

I don't either, but it does help. Didn't bother venturing into the source to understand what was up.
Some work has begun here on pytest support: https://github.com/bazelbuild/rules_python/pull/464
It would be nice to support Bazel's `--test_filter` parameter as well. When it is specified, Bazel populates the `TESTBRIDGE_TEST_ONLY` env var with the given string, which can be used by the test runner to filter the executed tests.
I'm using this in my own pytest runner, in combination with some changes I've made to the IntelliJ plugin to pass `<workspace_relative_path>$<test_function>` as the string, then invoking pytest with something like:
```python
import os
import sys

import pytest


def run(argv=None):
    if argv is None:
        argv = sys.argv[1:]  # skip the program name; pytest doesn't want it
    args = list(argv)
    filter = os.environ.get("TESTBRIDGE_TEST_ONLY")
    if filter:
        parts = filter.split("$", 1)
        args.append(parts[0])  # path
        if len(parts) > 1:
            args.extend(["-k", parts[1]])  # function name
    else:
        args.append(".")  # test everything in working directory
    return pytest.main(args)


if __name__ == "__main__":
    sys.exit(run())
```
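For reference, the filter-splitting logic above boils down to the following; the example filter values assume the `<workspace_relative_path>$<test_function>` convention described:

```python
def split_test_filter(value):
    """Turn a Bazel --test_filter string into pytest arguments.

    Mirrors the parsing in the snippet above: text before the "$" is a
    path, text after it (if any) becomes a pytest -k expression.
    """
    parts = value.split("$", 1)
    args = [parts[0]]  # path to the test file or directory
    if len(parts) > 1:
        args.extend(["-k", parts[1]])  # narrow to one test function
    return args


print(split_test_filter("tests/test_foo.py$test_bar"))
# → ['tests/test_foo.py', '-k', 'test_bar']
print(split_test_filter("tests/test_foo.py"))
# → ['tests/test_foo.py']
```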
Bazel is throwing `ModuleNotFoundError` when `sys.exit(pytest.main())` is used instead of just `pytest.main()`. But the same script works in PyCharm.

```python
import sys

import pytest


def test_true():
    assert True == False


if __name__ == "__main__":
    sys.exit(pytest.main())
```

```starlark
py_test(
    name = "test_file",
    srcs = ["test_file.py"],
    main = "test_file.py",
)
```

```
E   ModuleNotFoundError: No module named '__main__.source'; '__main__' is not a package
```

`source` is the directory containing `test_file.py` and BUILD. The WORKSPACE file is kept empty. Could you let me know where I have gone wrong?
I put up something until it's available in `rules_python`: https://github.com/caseyduquettesc/rules_python_pytest
This issue has been automatically marked as stale because it has not had any activity for 180 days. It will be closed if no further activity occurs in 30 days. Collaborators can add an assignee to keep this open indefinitely. Thanks for your contributions to rules_python!
*activities*
this is the solution I'm using currently: https://gist.github.com/betaboon/c1dd785b5ba468b4df4e382eafff969a
Should close this out; see https://github.com/bazelbuild/rules_python/pull/723#issuecomment-1307664330
Ftr here's the solution I've been using happily for some time -
Closing. @arrdem if you want to do a PR with the support I would love to take a look at it.
https://github.com/caseyduquettesc/rules_python_pytest this came up in the slack yesterday, leaving it here for posterity. I note with some amusement it's almost line-for-line what I invented for my repo and what we're now using at Abnormal.
I think this comment https://github.com/caseyduquettesc/rules_python_pytest/blob/main/python_pytest/defs.bzl#L34 captures why having "in the box" support would be hard.
Maybe you could do the same dance as https://github.com/aignas/rules_python/commit/6b54226c923ab4c0cd60736cdf9d82e7db714b48, but sideloading requirements in alongside user code, with the potential for requirements conflicts, seems like trouble. It'd be better to find a way to configure the Python toolchain with a `requirement` function/macro, but I dunno if we can do that.
Another 3p pointer: https://docs.aspect.build/rules/aspect_rules_py/docs/rules#py_pytest_main provides missing glue for pytest, thanks to @mattem and @f0rmiga
Another project that provides pytest integration is here: https://github.com/aignas/pytest-bazel

See the doc links in the repo for usage.
`py_test` is a Bazel rule, and pytest is a Python library for writing and running tests (https://docs.pytest.org, https://github.com/pytest-dev/pytest).

I've found two third-party Bazel rules for running tests written with pytest, bazel_rules_pex and rules_pyz, but both projects are unmaintained and don't work anymore. Edit 2020-01-15: see https://github.com/ali5h/rules_pip/blob/b4e94d0ddc95367217b7059bcdb926c159d28602/defs.bzl#L139 as well.

pytest has been mentioned a few times on the bazel mailing list: https://groups.google.com/forum/?nomobile=true#!searchin/bazel-discuss/pytest%7Csort:date

I'm not sure what the correct way to add support for it would be. Probably another rule, but I don't know what it should be called, because the names are so similar:

- `pytest`
- `pytest_py_test`
- `py_test_pytest`

all seem pretty funny. Or maybe it could be an option for `py_test`, or a global one in WORKSPACE. pytest can run tests written with `unittest` (and has nicer output when a test fails), so theoretically you could just change the `py_test` rule to run tests with pytest, but that's obviously not happening.