Closed · liamhuber closed this 1 month ago
Coverage variation | Diff coverage |
---|---|
:white_check_mark: +0.04% (target: -1.00%) | :white_check_mark: 95.95% |
Codacy stopped sending the deprecated coverage status on June 5th, 2024. Learn more
Files with Coverage Reduction | New Missed Lines | %
---|---|---
nodes/composite.py | 13 | 92.49%
node.py | 24 | 90.94%
Total: | 37 |
Totals | |
---|---|
Change from base Build 11095845803: | 0.04% |
Covered Lines: | 3088 |
Relevant Lines: | 3371 |
When I went to actually test this, I discovered that `executorlib` (at least how I'm running it!) is killing the slurm jobs when the `Executor` dies, so they are not persistent after all. It's possible this is merely user error on my end and there's a way to flag the jobs as persistent (corresponding issue: https://github.com/pyiron/executorlib/issues/412), and if so this will still be immediately useful.

Otherwise, there is no handy-dandy persistent-job executor floating around, in which case there's nothing fundamentally wrong with the work here, but it would be a lot of complication for no immediate benefit, and I would rather close it.
Ok, it turns out the way forward with `executorlib` is its `FileExecutor` and `cache` module, as shown here: https://github.com/pyiron-dev/remote-executor

This is a complex enough interface that I don't have time to explore it now. I think this is probably still the way forward for submitting an individual node in a graph off to slurm, and I think you still need the last-minute serialization introduced in this PR, so I'm going to leave this open but draft. In the meantime, submitting entire graphs to slurm at once is working fine.
I tried running the `pysqa` + `executorlib.FileExecutor` example as-written on cmmc and it just hung. This is almost certainly something simple like a dependency mismatch compared to the binder env, but TBH I don't even want to fight with it right now, so instead I took five minutes to make a laughably bad child of `concurrent.futures.Executor` that runs something independent of the parent python process, and used that. The `HPC_example.ipynb` language has been updated to further clarify that the examples there are proofs-of-concept and not intended as real or long-term interfaces; real interfaces should certainly use `pysqa` and may use `FileExecutor`.
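For the curious, here is a minimal sketch of what such a throwaway detached executor could look like (an illustration only, not the class actually used in `HPC_example.ipynb`): it subclasses `concurrent.futures.Executor`, ships the callable to a new python process started in its own session via `cloudpickle`, and fulfills the future by polling for a result file, so the submitted work does not die with the parent interpreter.

```python
# Illustrative only -- not the actual class used in HPC_example.ipynb.
# A crude executor whose work runs in a detached python subprocess, so it
# keeps going even if the parent python process exits.
import os
import subprocess
import sys
import tempfile
import threading
import time
from concurrent.futures import Executor, Future

import cloudpickle

_RUNNER = """
import os, sys, cloudpickle
with open(sys.argv[1], "rb") as f:
    fn, args, kwargs = cloudpickle.load(f)
result = fn(*args, **kwargs)
tmp = sys.argv[2] + ".tmp"
with open(tmp, "wb") as f:
    cloudpickle.dump(result, f)
os.replace(tmp, sys.argv[2])  # atomic, so the poller never sees a partial file
"""


class DetachedExecutor(Executor):
    """Run each submission in its own session-detached python process."""

    def submit(self, fn, /, *args, **kwargs):
        workdir = tempfile.mkdtemp()
        task_file = os.path.join(workdir, "task.pkl")
        result_file = os.path.join(workdir, "result.pkl")
        with open(task_file, "wb") as f:
            cloudpickle.dump((fn, args, kwargs), f)
        # start_new_session=True detaches the child from this process group,
        # so closing or killing the parent does not take the work down with it
        subprocess.Popen(
            [sys.executable, "-c", _RUNNER, task_file, result_file],
            start_new_session=True,
        )
        future = Future()
        threading.Thread(
            target=self._poll, args=(result_file, future), daemon=True
        ).start()
        return future

    @staticmethod
    def _poll(result_file, future):
        # Fulfill the future once the child process has written its result
        while not os.path.exists(result_file):
            time.sleep(0.1)
        with open(result_file, "rb") as f:
            future.set_result(cloudpickle.load(f))
```

Something like `DetachedExecutor().submit(some_function, some_arg)` then behaves enough like a normal executor for the proof-of-concept, although there is obviously no error handling, no timeout, and no cleanup of the temporary files.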
To work with long-duration nodes on executors that survive the shutdown of the parent workflow/node python process (e.g. `executorlib` using slurm), we need to be able to tell the run paradigm to serialize the results, and to try to load such a serialization if we come back and the node is running.

This introduces new attributes: `Node.serialize_results` to trigger the result serialization, and a private `Node._do_clean` to let power users (i.e. me writing the unit tests) stop the serialized results (and any empty directories) from getting cleaned up automatically at read-time.

Under the hood, `Node` now directly implements `Runnable.on_run` and `Runnable.run_args`, leveraging the new detached path from #457 to make sure that each run has access to a semantically relevant path for writing the temporary output file (using cloudpickle). Child classes of `Node` implement new abstract methods `Node._on_run` and `Node._run_args` in place of the previous `Runnable` abstract methods they implemented.

Still needs work with saving and reloading the parent node. E.g. it will presumably also be re-loaded in the "running" state, but we'd like it to be pretty easy to keep going -- maybe `Composite.resume()`?

EDIT: Instead of manually saving a checkpoint, this now just leans on the recovery file getting written when the parent python process gets shut down. There's also no hand-holding around updating the failed status or cache usage of such shut-down nodes.
Overall I'm a little sad about the added layer of misdirection where `Runnable.on_run` is abstract and implemented by `Node.on_run` to handle the result serialization, then `Node._on_run` is a new abstract... but it is verbose more than complex, so I can grumpily live with it.

Tests leverage the flags manually to spoof the behaviour, but a live test in a read-only notebook shows how the basic operation is working.
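For illustration, here is a rough sketch of the wrapping pattern described above, with simplified names and file locations (the real implementation hangs off `Node`/`Runnable` and uses the detached working directory from #457): the concrete `on_run` delegates to the abstract `_on_run` and, when the flag is set, dumps the return value with cloudpickle so that a parent process that was shut down and reloaded can recover the result from disk instead of re-running.

```python
# Simplified illustration of the serialize-and-recover idea; the actual
# attribute names and paths on Node differ in detail.
from pathlib import Path

import cloudpickle


class SketchNode:
    serialize_results = False  # opt-in flag, analogous to Node.serialize_results
    _do_clean = True           # power-user switch to keep the file around

    def __init__(self, directory: str):
        self._output_file = Path(directory) / "output.cpkl"

    def _on_run(self, *args, **kwargs):
        # Child classes implement the actual computation here
        raise NotImplementedError

    def on_run(self, *args, **kwargs):
        # Concrete wrapper: run, then (optionally) persist the result so a
        # parent process that was shut down can still collect it later
        result = self._on_run(*args, **kwargs)
        if self.serialize_results:
            self._output_file.parent.mkdir(parents=True, exist_ok=True)
            with self._output_file.open("wb") as f:
                cloudpickle.dump(result, f)
        return result

    def recover_result(self):
        # On reload while still flagged as "running", try the serialized file
        with self._output_file.open("rb") as f:
            result = cloudpickle.load(f)
        if self._do_clean:
            self._output_file.unlink()
            if not any(self._output_file.parent.iterdir()):
                self._output_file.parent.rmdir()
        return result
```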