computationalmodelling / nbval

A py.test plugin to validate Jupyter notebooks

doc: fix skipping certain output types #169

Closed tlvu closed 2 years ago

tlvu commented 3 years ago

Fixes https://github.com/computationalmodelling/nbval/issues/168

Without this fix we have the following error:

py.test --nbval notebooks/hummingbird.ipynb --sanitize-with notebooks/output-sanitize.cfg
================================================================ test session starts =================================================================
platform linux -- Python 3.7.10, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /repos/PAVICS-e2e-workflow-tests
plugins: tornasync-0.6.0.post2, anyio-2.2.0, nbval-0.9.6, dash-1.20.0
collected 0 items
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/_pytest/main.py", line 322, in _main
INTERNALERROR>     config.hook.pytest_collection(session=session)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR>     firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/_pytest/main.py", line 333, in pytest_collection
INTERNALERROR>     session.perform_collect()
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/_pytest/main.py", line 620, in perform_collect
INTERNALERROR>     rep = collect_one_node(self)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/_pytest/runner.py", line 457, in collect_one_node
INTERNALERROR>     ihook.pytest_collectstart(collector=collector)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR>     firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/opt/conda/envs/birdy/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/repos/PAVICS-e2e-workflow-tests/conftest.py", line 2, in pytest_collectstart
INTERNALERROR>     collector.skip_compare += 'text/html', 'application/javascript',
INTERNALERROR> AttributeError: 'Session' object has no attribute 'skip_compare'

=============================================================== no tests ran in 0.01s ================================================================
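For reference, guarding the hook so it only touches collectors that actually expose skip_compare avoids this error. A minimal sketch (the exact guard used in the updated documentation may differ):

# conftest.py -- sketch of a guarded hook; only notebook collectors
# expose `skip_compare`, the top-level Session object does not.
def pytest_collectstart(collector):
    if hasattr(collector, 'skip_compare'):
        collector.skip_compare += 'text/html', 'application/javascript', 'stderr'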
vidartf commented 3 years ago

The test failure seems relevant. The question is whether the test is wrong, or the code is wrong.

tlvu commented 3 years ago

The test failure seems relevant. The question is whether the test is wrong, or the code is wrong.

FYI I was following this existing test:

https://github.com/computationalmodelling/nbval/blob/1397af8a82dc3966db8ba6a706cbf2f0e32db152/tests/test_ignore.py#L12-L14

@vidartf I have a feeling a collector object is created for each directory/file level in the path, not just for the notebook file itself. We just hit another AttributeError: 'Session' object has no attribute 'skip_compare' simply because a parent directory also ends with .ipynb; see our recent commit https://github.com/Ouranosinc/PAVICS-e2e-workflow-tests/pull/82/commits/da918ecb843fdb0fc09191610a231557271fde77

Is that the expected behavior of the collector object?
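A quick way to see what the hook is actually called with (hypothetical debugging snippet, not part of the fix):

# conftest.py -- hypothetical debugging hook: print every collector that
# pytest_collectstart is invoked with (session, directories, notebook file).
def pytest_collectstart(collector):
    print(type(collector).__name__, getattr(collector, 'name', '<session>'))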

yan12125 commented 3 years ago

Those test failures are quite similar to the matplotlib one discussed in https://github.com/computationalmodelling/nbval/issues/167. There is also a workaround in that issue.

tlvu commented 3 years ago

Those test failures are quite similar to the matplotlib one discussed in #167. There is also a workaround in that issue.

Not sure how this is related. The workaround above is to ignore a deprecation warning, but this error is about how to properly configure nbval to skip comparing certain output types. There are output types other than stderr, like text/html, so ignoring the deprecation warning will not help.

yan12125 commented 3 years ago

Well, I was responding to an earlier comment:

The test failure seems relevant.

From what I understand, the test failure is unrelated to this change. Instead, it happens because the deprecation warning is printed to stderr, so there are Unexpected output fields from running code: {'stderr'} errors, as shown in this log: https://travis-ci.org/github/computationalmodelling/nbval/jobs/771723312. Ignoring deprecation warnings should make the tests pass.
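For example, silencing the warning in the notebook itself would keep stderr empty during execution. A minimal sketch (hypothetical; #167 describes the actual workaround):

# hypothetical first cell of the notebook under test: silence
# DeprecationWarnings so nothing unexpected is written to stderr
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)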

takluyver commented 2 years ago

Thanks!