Open mcflugen opened 1 year ago
Merging #95 (3e86d46) into main (966edac) will decrease coverage by 0.37%. The diff coverage is 89.58%.
```diff
@@            Coverage Diff             @@
##             main      #95      +/-   ##
==========================================
- Coverage   92.46%   92.10%   -0.37%
==========================================
  Files           4        4
  Lines         146      190      +44
==========================================
+ Hits          135      175      +40
- Misses         11       15       +4
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/nbmake/pytest_plugin.py | 91.66% <83.33%> (-3.08%) | :arrow_down: |
| src/nbmake/pytest_items.py | 87.30% <84.61%> (-1.16%) | :arrow_down: |
| src/nbmake/nb_run.py | 94.62% <93.10%> (+0.77%) | :arrow_up: |
Thanks @mcflugen for this contribution; it's a great idea for a feature.
Some notes:
@alex-treebeard Ugh. So sorry for the delay on this!
> We should probably namespace metadata fields specific to nbmake like we do for mocks
I've done this so that markers are now specified in the nbmake namespace:
```json
{
  "cells": [ ... ],
  "metadata": {
    "kernelspec": { ... },
    "execution": {
      "nbmake": {
        "markers": ["slow"]
      }
    }
  }
}
```
> Please add a test to ensure that this does not change the failure mode for notebooks that cannot be read (e.g. because they are not valid JSON). I can help with this if you are not sure how.
Maybe you could give me some more guidance on this?
hey @mcflugen - thanks so much for continuing this contribution.
Many people use this library but very few help improve it.
The issue, specifically: I can see that during test collection (the phase before tests are executed), your new code will throw an unhandled exception if there is an invalid notebook.
My ask is that we err on the side of not breaking existing usage and ignore invalid notebooks at collection time (ideally printing a warning to the user).
In practice, this means you need to run this test, see that it's broken on your end, then push a fix.
Thanks again, and I'm looking forward to merging and releasing this once the tests pass.
EDIT: If you have a better idea for handling invalid JSON at collection time, I'm happy to hear your approach.
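One possible shape for the collection-time guard, as a sketch only (the class names here are illustrative rather than nbmake's actual ones in src/nbmake/pytest_items.py, and it assumes pytest 7+ path-based hooks):

```python
import warnings
from pathlib import Path

import nbformat
import pytest


def pytest_collect_file(file_path: Path, parent):
    # The real plugin would also check its own command-line option before collecting.
    if file_path.suffix == ".ipynb":
        return NotebookFile.from_parent(parent, path=file_path)
    return None


class NotebookFile(pytest.File):
    def collect(self):
        try:
            nb = nbformat.read(str(self.path), as_version=4)
        except Exception as error:  # e.g. the file is not valid JSON
            # Warn and yield nothing so collection of other notebooks continues.
            warnings.warn(f"nbmake: ignoring unreadable notebook {self.path}: {error}")
            return
        item = NotebookItem.from_parent(self, name=self.path.name)
        raw = nb.metadata.get("execution", {}).get("nbmake", {}).get("markers", [])
        # Accept either the list form or the comma-separated string form.
        markers = raw.split(",") if isinstance(raw, str) else raw
        for marker in markers:
            item.add_marker(marker.strip())  # lets -m expressions filter notebooks
        yield item


class NotebookItem(pytest.Item):
    def runtest(self):
        # Placeholder: the real item would execute the notebook (see nb_run.py).
        pass
```

This keeps an unreadable notebook from aborting the whole collection phase while still surfacing a warning to the user.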
Apologies for the unsolicited pull request. I've added a small bit of functionality to nbmake that we have found to be useful and thought others might find useful as well.

This pull request allows users to mark notebooks in a way similar to what one might do with `@pytest.mark`. Markers are included within a notebook's metadata, much like how the `allow_errors` and `timeout` options are specified. As an example, metadata like the sketch below would mark a notebook as "slow".
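A minimal sketch of such metadata, following the nbmake-namespaced layout shown earlier in this thread (other notebook keys omitted):

```json
{
  "metadata": {
    "execution": {
      "nbmake": {
        "markers": ["slow"]
      }
    }
  }
}
```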
Markers can be either an array of strings (e.g. `["slow", "memory_intensive"]`) or a string of comma-separated markers (e.g. `"slow,memory_intensive"`). Notebooks can then be included or excluded in the tests through the `-m` option, just as with other pytest tests (e.g. `pytest **/*.ipynb -m "slow and not memory_intensive"` would only run notebooks that are slow but not memory intensive).

If this is something you think could be a part of nbmake (great!), I can add some tests and write a bit of documentation to go with this. If not, no worries.
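As a usage note (not something this PR needs to add): like any custom pytest marker, names such as "slow" would typically be registered, for example in a conftest.py, so that `-m` filtering does not produce unknown-mark warnings. A sketch:

```python
# conftest.py -- register the custom markers used in notebook metadata so that
# pytest recognizes them when filtering with -m.
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: notebook takes a long time to run")
    config.addinivalue_line(
        "markers", "memory_intensive: notebook needs a large amount of memory"
    )
```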