ploomber / ploomber-engine

A toolbox 🧰 for Jupyter notebooks 📙: testing, experiment tracking, debugging, profiling, and more!
https://engine.ploomber.io
BSD 3-Clause "New" or "Revised" License

Cleaned Notebook output and cell index #62

Closed mehtamohit013 closed 1 year ago

mehtamohit013 commented 1 year ago

Describe your changes

Cleaned notebook output and cell index before executing
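As a rough illustration of what "cleaning" means here, a minimal sketch (not the PR's actual code) that strips outputs and execution counts from a notebook's JSON structure before execution might look like:

```python
# Hypothetical sketch: a notebook is a JSON document with a "cells" list;
# cleaning it means emptying each code cell's outputs and resetting its
# execution count (the "cell index" shown as In [n] in Jupyter).
def clean_notebook(nb: dict) -> dict:
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

nb = {
    "cells": [
        {"cell_type": "code", "execution_count": 3,
         "source": "1 + 1", "outputs": [{"output_type": "execute_result"}]},
        {"cell_type": "markdown", "source": "# notes"},
    ]
}
cleaned = clean_notebook(nb)
assert cleaned["cells"][0]["outputs"] == []
assert cleaned["cells"][0]["execution_count"] is None
```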

Issue ticket number and link

Closes #59

Checklist before requesting a review


:books: Documentation preview :books:: https://ploomber-engine--62.org.readthedocs.build/en/62/

edublancas commented 1 year ago

is this ready for review? if so, please don't forget to request it from the right bar so we can get to it!

mehtamohit013 commented 1 year ago

I have written the test, but it is failing: whenever a cell crashes in execute_cell(), the function raises an error, because every cell's result is checked with result.raise_error(), so the test fails. I could add a parameter to execute() for this single test case, such as bypass_error, which would run all the cells regardless of whether an earlier cell failed or succeeded.

edublancas commented 1 year ago

I think this should work:

with pytest.raises(SomeError):
    execute_notebook()

# check that the notebook is cleaned up

this will capture the error and allow you to test the notebook.

mehtamohit013 commented 1 year ago

As soon as execute() throws the error, pytest terminates the test:

with pytest.raises(NameError):
    out = client.execute()
    assert out == ......

Here, the assert statement is never checked.
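This is expected behavior: once the exception is raised, the rest of the `with pytest.raises(...)` block is skipped, so any assert placed inside it after the raising call never runs. Assertions on side effects (such as the cleaned notebook) belong after the block. A self-contained sketch, using a hypothetical stand-in for pytest.raises so it runs without pytest:

```python
from contextlib import contextmanager

# Hypothetical stand-in for pytest.raises, for illustration only:
# it swallows the expected exception, and fails if it was not raised.
@contextmanager
def raises(exc_type):
    try:
        yield
    except exc_type:
        pass
    else:
        raise AssertionError(f"{exc_type.__name__} was not raised")

def execute():
    # stands in for a notebook execution that hits an undefined name
    raise NameError("name 'x' is not defined")

ran_after = False
with raises(NameError):
    execute()
    ran_after = True  # never reached: the exception skips the rest of the block

assert ran_after is False

# Correct pattern: do the call inside the block, assert afterwards.
with raises(NameError):
    execute()
# assertions on side effects (e.g., the cleaned-up notebook file) go here
```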

edublancas commented 1 year ago

ensure that the argument passed to .raises is the same as the error the notebook raises.

here's an example that follows that recipe: https://github.com/ploomber/ploomber-engine/blob/ecbcbff6dac74481f1e30062659b6c70fbb0b643/tests/test_ipython.py#L109

mehtamohit013 commented 1 year ago

Hi @edublancas, I have added the changes to the changelog and bumped the version to 0.0.25, but the test is still failing. Can you please suggest what went wrong?

edublancas commented 1 year ago

we're following semver, so we have a script that checks for any API-breaking changes and requires us to bump a major version release when there are. In this case, the change you're making won't break the API, so you can mark it as [Fix]; that will fix the problem
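Based on that convention, a hypothetical changelog entry for this change (wording and layout illustrative, not the project's actual file) might look like:

```md
## 0.0.25dev

* [Fix] Clean notebook outputs and cell execution counts before executing (#59)
```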

edublancas commented 1 year ago

great work!