aiidateam / aiida-testing

A pytest plugin to simplify testing of AiiDA plugins.
MIT License

mock-code: improve error message for missing test data #43

Closed. ltalirz closed this issue 1 year ago.

ltalirz commented 3 years ago

Here is an example of a traceback resulting from missing test data on CI.

The error displayed is that a node expected from the calculation was not created, but this can have many causes, which makes it unnecessarily hard to debug.

Of course, the calculation class itself could be more clever about parsing the outputs, but ideally we (i.e. aiida-testing) would be able to communicate to pytest that the mock code encountered an input it did not yet know.

I'm not quite sure about the best way to accomplish this. One could perhaps try to monkey-patch the calculation class of the input plugin when setting up the mock code Code instance, such that it does some extra check after the original _prepare_for_submission, but perhaps there is also a less invasive way?
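For illustration, such a monkey-patch could look roughly like the sketch below; the `check_test_data_available` helper is hypothetical, and none of this is an existing aiida-testing feature.

```python
# Sketch only: wrap the plugin's _prepare_for_submission so an extra check runs
# right after it. `check_test_data_available` is a placeholder for whatever check
# aiida-testing would perform (e.g. hashing the prepared inputs).
import functools


def patch_calc_class(calc_class, check_test_data_available):
    """Monkey-patch `calc_class` so the check runs after _prepare_for_submission."""
    original = calc_class._prepare_for_submission

    @functools.wraps(original)
    def wrapped(self, folder):
        calc_info = original(self, folder)
        # `folder` is the sandbox folder containing the prepared input files.
        check_test_data_available(folder)
        return calc_info

    calc_class._prepare_for_submission = wrapped
    return calc_class
```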

Any ideas @greschd ?

greschd commented 3 years ago

Hmm.. in that example, isn't the main problem that the exception printed by the mock code doesn't make its way into the pytest log?

In principle we can maybe find a way to forward the error up to pytest, but the same problem also occurs when the calculation itself fails, right? As in, you'd also need to dig into the exception that was the root cause of the test failing.

ltalirz commented 3 years ago

> Hmm.. in that example, isn't the main problem that the exception printed by the mock code doesn't make its way into the pytest log?

Are you saying that you believe this should happen already or are you wondering how to implement this? It is not clear to me how the exception of the mock executable can be forwarded to pytest.

> In principle we can maybe find a way to forward the error up to pytest, but the same problem also occurs when the calculation itself fails, right? As in, you'd also need to dig into the exception that was the root cause of the test failing.

Well, the difference here is that we can know in advance that mock-code will fail. mock-code is written in Python anyway; we can make sure (in Python) that the required test data is available before running the executable.
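As a rough sketch of such an upfront check (the hashing scheme, directory layout, and error message are simplified assumptions, not aiida-testing's actual implementation):

```python
# Sketch of a pre-flight check: hash the prepared input files and verify that a
# matching directory exists in the test data store, failing with a clear message.
import hashlib
import pathlib


def check_test_data_available(input_dir: pathlib.Path, data_dir: pathlib.Path) -> None:
    """Raise a descriptive error if no recorded outputs exist for these inputs."""
    md5 = hashlib.md5()
    for path in sorted(p for p in input_dir.rglob('*') if p.is_file()):
        md5.update(path.read_bytes())
    input_hash = md5.hexdigest()

    if not (data_dir / input_hash).is_dir():
        raise FileNotFoundError(
            f"mock-code: no test data for input hash '{input_hash}' under '{data_dir}'. "
            "Run the test against the actual executable once to record the outputs."
        )
```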

greschd commented 3 years ago

> Are you saying that you believe this should happen already or are you wondering how to implement this? It is not clear to me how the exception of the mock executable can be forwarded to pytest.

Neither - I was sort of thinking of a related but somewhat different concern: The test should probably be written in such a way that the report and exit code of the workchain is printed if it fails. Maybe we can add a helper function for that to aiida-testing. And then the workchain itself needs to make sure that critical errors in a calculation end up in its report. That's probably good practice in any case, but not really something relating to the tests directly.
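Such a helper could look roughly like this; it only relies on generic ProcessNode properties (`exit_status`, `exit_message`, `called_descendants`) and is a sketch, not an existing aiida-testing function:

```python
# Sketch of a test helper that surfaces the exit status and exit message of a
# process and of every process it called, so a failing test shows the root cause.
def assert_finished_ok(node):
    if node.is_finished_ok:
        return
    lines = [
        f'{node.process_label}<{node.pk}> failed with exit status '
        f'{node.exit_status}: {node.exit_message}'
    ]
    for called in node.called_descendants:
        lines.append(
            f'  {called.process_label}<{called.pk}>: '
            f'exit status {called.exit_status}, message: {called.exit_message}'
        )
    raise AssertionError('\n'.join(lines))
```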

> Well, the difference here is that we can know in advance that mock-code will fail.

Hmm, that would involve adding a piece of code "in front of" the actual execution of the code I guess? Right now the mock executable is a "stand-alone" thing and it's not quite so easy to pass that knowledge back and forth. But I'm not sure, maybe a more integrated solution is anyway the better design. I could imagine putting the whole mock code execution into a monkeypatched-in step after _prepare_for_submission.

ltalirz commented 3 years ago

> The test should probably be written in such a way that the report and exit code of the workchain is printed if it fails. Maybe we can add a helper function for that to aiida-testing. And then the workchain itself needs to make sure that critical errors in a calculation end up in its report. That's probably good practice in any case, but not really something relating to the tests directly.

I agree that this is good practice, but it would be nice if aiida-testing would not have to rely on people following it.

> Hmm, that would involve adding a piece of code "in front of" the actual execution of the code I guess? Right now the mock executable is a "stand-alone" thing and it's not quite so easy to pass that knowledge back and forth.

I was indeed thinking simply of running a piece of code after _prepare_for_submission and before running the executable: it would check the hash of the inputs and raise an exception if the hash was not found in the test database and no "fallback" executable is specified. The mock executable could in principle remain unchanged (resulting in some duplication; one could also move these lines to a central place where mock-code can reuse them), but there would be no need to communicate to/from the executable.
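The "central place" could be as small as one shared function for the hash computation, imported both by the pre-execution check and by the mock executable; the module location and signature below are made up for illustration:

```python
# Hypothetical shared helper (e.g. in an aiida_testing.mock_code submodule) so the
# pre-execution check and the mock executable do not each carry their own copy of
# the hashing logic. The hashing scheme shown here is simplified.
import hashlib
import pathlib


def hash_input_files(input_dir: pathlib.Path) -> str:
    """Return a single hash over the prepared input files, in a stable order."""
    md5 = hashlib.md5()
    for path in sorted(p for p in input_dir.rglob('*') if p.is_file()):
        md5.update(str(path.relative_to(input_dir)).encode())
        md5.update(path.read_bytes())
    return md5.hexdigest()


# Pre-execution check (after _prepare_for_submission, before the executable runs):
# raise only if no directory for hash_input_files(sandbox) exists in the test data
# *and* no fallback executable is specified. The mock executable reuses the same
# function to look up which recorded outputs to copy back.
```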

> But I'm not sure, maybe a more integrated solution is anyway the better design. I could imagine putting the whole mock code execution into a monkeypatched-in step after _prepare_for_submission.

If that works, it would probably be more elegant.

greschd commented 3 years ago

Yeah, I think if we're already hooked in before the executable is actually run, it would make sense to do everything there.

Can you try setting up just the monkeypatch code (not necessarily implement any functionality, just a way to inject code)? I think that'll help figure out what is possible.

ltalirz commented 3 years ago

After looking a bit into the code, it seems to me that this might be a place to hook in: https://github.com/aiidateam/aiida-core/blob/02248cf3686a0ab89faf1625e0da24d9e33d8cde/aiida/schedulers/scheduler.py#L384-L391
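Just to make the idea concrete, a proof-of-concept hook at that level might look like the sketch below (this is not what the eventual PR does; the `aiida-mock-code-check` command is made up):

```python
# Sketch: wrap Scheduler.get_submit_script so every generated submit script runs a
# (hypothetical) check command before the mock executable, failing fast with a
# readable message if the test data for this input is missing.
import pytest
from aiida.schedulers import Scheduler


@pytest.fixture
def mock_code_precheck(monkeypatch):
    original = Scheduler.get_submit_script

    def patched(self, job_tmpl):
        script = original(self, job_tmpl)
        lines = script.splitlines()
        # Insert the check right after the shebang so it runs before anything else.
        lines.insert(1, 'aiida-mock-code-check || exit 1')
        return '\n'.join(lines) + '\n'

    monkeypatch.setattr(Scheduler, 'get_submit_script', patched)
```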

I'll give it a quick try and will report back here.

ltalirz commented 3 years ago

I opened https://github.com/aiidateam/aiida-testing/pull/49 (which so far just illustrates the concept). I would appreciate help on any of the open points, as I cannot spend much more time on this.

ltalirz commented 1 year ago

This has been addressed by @chrisjsewell in a different way (by surfacing the content of a log file in the test output) in https://github.com/aiidateam/aiida-testing/pull/63.
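For reference, the executable side of such a log-file approach could look roughly like this (the file name and wiring are illustrative assumptions, not necessarily what #63 implements):

```python
# Illustrative sketch: the mock executable writes any error to a log file in the
# calculation's working directory, so the test can print that file when the
# calculation fails. The file name and the raised error are placeholders.
import pathlib
import sys


def run_mock_code() -> int:
    log = pathlib.Path('.aiida-testing-mock-code.log')
    try:
        # ... look up the input hash and copy back the recorded outputs ...
        raise FileNotFoundError('no test data found for these inputs')  # placeholder
    except Exception as exc:
        log.write_text(f'mock-code failed: {exc}\n')
        return 1
    return 0


if __name__ == '__main__':
    sys.exit(run_mock_code())
```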