Open tlvu opened 5 years ago
It's not well documented, but if you have nbdime installed, you can add the `--nbdime` flag
to the pytest command to use nbdime to visualize the diff between the stored outputs and those from a fresh run. Nbdime will typically give you the full context as well as the diff.
Was this something along the lines of what you were looking for?
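For instance, assuming both nbval and nbdime are installed, the invocation might look like this (the notebook name is hypothetical):

```shell
# Run nbval's notebook validation and render any output mismatches
# through nbdime's rich differ (requires the nbdime package).
py.test --nbval --nbdime my_notebook.ipynb
```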
Will check out nbdime for sure. I was looking for a way to have a resulting .ipynb file generated as part of the build and saved as a build artifact so others can look at it and troubleshoot build errors.
Basically, give the user two options to troubleshoot errors during the nightly automated build: look at the error summary from py.test, or look at the resulting .ipynb file if the py.test error is not clear.
Unfortunately nbdime is not what I was looking for. But if the user has access to the resulting .ipynb file, then the user can do `nbdiff-web original.ipynb resulting.ipynb`,
and then nbdime will be worth its weight in gold!
So yes, nbval does not currently have a way to save the output, but nbconvert should do that correctly, as you note. Do you consider this a satisfactory solution (if so, you can close this issue), or are you requesting an enhancement of nbval to have a CLI flag for writing out a new notebook with the outputs from its test run? If the latter, I think it sounds reasonable in principle, so a PR implementing it would be very welcome! I could help review it, but I'm not in a position right now to prioritize implementing it myself.
I am requesting an enhancement for nbval to write out a new notebook with the outputs from its test run.
Having nbconvert generate the output is a possible work-around, but it is not optimal: it is effectively a second run of the test suite, which doubles the test run time and gives no guarantee of reproducing exactly the same errors as the original test run. This is the work-around I have implemented right now.
We'll see how loudly people scream at this work-around. If it's too much, I might consider submitting a PR, but for now please treat this as a simple enhancement request; I'm not planning a PR.
So how would you like me to follow up? Close this issue and open a new one as an enhancement request, or rename this issue's title to make it an enhancement request so we preserve all the discussion details?
Edit: work-around implemented by running nbconvert after py.test: https://github.com/Ouranosinc/PAVICS-e2e-workflow-tests/blob/ee7e0992699756df137337e8bd10a458404a2078/runtest#L41
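A minimal sketch of such a work-around (file names are illustrative, not taken from the linked script):

```shell
# First run: validate notebook outputs with nbval.
py.test --nbval original.ipynb

# Second run: re-execute the same notebook with nbconvert to save the
# freshly produced outputs as a build artifact. Note this doubles the
# execution time and may not reproduce the exact same errors.
jupyter nbconvert --to notebook --execute \
    --ExecutePreprocessor.timeout=600 \
    --output resulting.ipynb original.ipynb
```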
Rename would be sufficient. Thanks!
Side note: I believe what I have done here, https://github.com/Ouranosinc/PAVICS-e2e-workflow-tests, is a great showcase for nbval. Thanks for the awesome py.test plugin.
nbval must have run the entire .ipynb file to get the new output to compare against the original output.
Is this resulting .ipynb file from the run available somewhere?
It is sometimes easier to troubleshoot errors, and more natural for users accustomed to reading notebooks, to see the resulting bad .ipynb with the full context of the entire notebook rather than just the diff given by py.test.
Alternatively, is nbval able to compare the resulting .ipynb file against the original file and then reproduce the summary as if it were run from py.test?
One can generate the resulting .ipynb file unattended this way:
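The command itself appears to have been cut off here. A typical unattended invocation using nbconvert would be something like the following (these flags are an assumption, not the exact command from the original post):

```shell
# Re-execute the notebook non-interactively and save the outputs to a
# new file; --allow-errors keeps execution going past failing cells so
# the failures are captured in the resulting notebook.
jupyter nbconvert --to notebook --execute --allow-errors \
    --output resulting.ipynb original.ipynb
```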