Closed chrisjsewell closed 1 year ago
should we add perhaps one or two small tests, one checking the content of the log file, and one checking that execution fails in the expected way when asking to fail on missing executable?
yep absolutely, just wanted to open this prematurely, so you knew I was working on it 😄
FYI @ltalirz the other thing I was thinking of doing is allowing users to specify a function as a string, to be loaded by https://docs.python.org/3/library/importlib.html#importlib.import_module in the mock code (or perhaps via entry points). That function could override the default hashing function, e.g. to deal with things like rounding errors in .xyz files.
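The idea above could be sketched roughly as follows. This is a minimal illustration, not the actual mock-code implementation; the "module:attribute" string format and the loader function name are assumptions for the example (stdlib objects are used as stand-ins for a user's custom hasher):

```python
import importlib


def load_object(path: str):
    """Load an object from a string like 'mypackage.hashing:my_hasher'.

    NOTE: the 'module:attribute' syntax here is only an assumption,
    chosen to mirror the entry-point style; the real option could use
    a plain dotted path instead.
    """
    module_name, _, attr_name = path.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)


# Example: resolve a function from its string specification.
# 'json:dumps' is just a stdlib stand-in for a user-supplied hasher.
custom_hasher = load_object("json:dumps")
print(custom_hasher({"a": 1}))
```

A user-provided function resolved this way could then replace the default file-hashing step, e.g. to round coordinates in .xyz files before hashing so that sub-epsilon differences do not change the hash.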
mock_code has been in use for a while, and the issue raised by @mpougin is the first time this has come up, to my knowledge.
Yesterday, she posted the following comparison between two XYZ files https://github.com/lsmo-epfl/aiida-lsmo/issues/102#issuecomment-1322512168 which does not look like a rounding error at all.
If we do determine that this is not a bug but indeed related to randomness at the level of the machine epsilon, then I agree we can implement such a functionality (although I suspect its use will be limited to "experts").
Cheers, the remaining changes look fine; just some minor suggestions (optional). Let me know whether you want to add them.
added cheers
cheers
Log run information from mock code execution, and print to stdout on test fixture teardown.
For example:
Also adds a --mock-fail-on-missing CLI option. This can be used for CI (e.g. GitHub Actions) runs, where you likely do not want to generate new cache data.