I didn't have time today to read in depth, but will do so before our meeting tomorrow.
Super. Indeed, if the academic agenda permits, spending time talking about this stuff live would be both enjoyable and helpful!
One thing that occurs to me right now: `to_storage`/`from_storage` is very similar to the `Storable` interface. I guess implementing `restore` would be more difficult in your setup because you wanted the objects to be live before populating from storage, but it would make other things much easier (no need to keep track of class names in storage, it already saves a simple version tag). If this is a blocker I would rather discuss what needs to change in `Storable` to enable it to be used here.
So there are two fundamental objects we're storing: `DataChannel` and `Node`. The channels are owned by nodes, and take their parent node as an arg at instantiation time. This sort of reciprocal link is super convenient when everything is alive, and making it a mandatory arg is perfectly sensible under those conditions, but it's hell for storage. So, for `DataChannel` I think we're stuck defining some sort of bespoke data-extraction routine. Thankfully this is so far quite simple.
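For concreteness, here is a minimal sketch of the pattern I mean -- the `to_storage`/`from_storage` names are the ones from this thread, but the attributes and the dict-like `storage` group are illustrative assumptions, not the real implementation:

```python
class DataChannel:
    def __init__(self, label, node, default=None):
        # The reciprocal link: a channel can't exist without its parent node
        self.label = label
        self.node = node
        self.value = default

    def to_storage(self, storage):
        # Bespoke extraction: write the plain data, skip the parent link
        storage["label"] = self.label
        storage["value"] = self.value

    def from_storage(self, storage):
        # Runs on an already-live channel (the parent link was set at
        # instantiation), so we only repopulate the stored attributes
        self.value = storage["value"]
```

The point is that `from_storage` never has to reconstruct the node/channel relationship, which is what makes the mandatory parent arg tolerable.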
For `Node` I tend to agree with you. I did try sticking `Storable` into the MRO, but ran into two problems, both of which are probably resolvable:

1. Some trouble with `Storable` and surrounding classes, and probably not a fundamental problem. We're doing all sorts of nonsense with decorators and dynamically generated classes, but I'm still cautiously optimistic the only real problem was my idiocy.
2. If `Node` inherits from it directly then I have to import at boot time and can't delay it until I go out of my way to call `.save()`. This is resolvable by getting `tinybase` out on its own somewhere, so is only a short-term barrier. But I don't want to merge and release a solution that requires the "pyiron" boot time, as right now getting running with workflows is pretty snappy.
Coverage variation | Diff coverage
---|---
:x: -2.76% (target: -1.00%) | :white_check_mark: 35.48%
We've detected an issue with your CI configuration that might affect the accuracy of this pull request's coverage report. To ensure accuracy in future PRs, please see these guidelines. A quick fix for this PR: rebase it; your next report should be accurate.
Totals | |
---|---|
Change from base Build 7576987663: | -1.3% |
Covered Lines: | 4799 |
Relevant Lines: | 5392 |
Ok, so we can now get the storage file itself from the parent-most node using #165, and then the path inside the storage to a particular child using #167. That means that children look in the right place in the root-parent's folder and we can do things like:

```python
>>> wf.ev.calc.storage["outputs/energy_dict/strict_hints"]
1
```

This is nice.
Tangentially, this led me to the question: @pmrv, are we still struggling with bools being restored as 0/1, or is there just a bug somewhere? Should I still care about this?
Also @pmrv: with the failing tests on `pyiron_contrib:tinydata` and the fairly deep-ish integration (I need to pull out `pyiron_base...HasGroups` too), I am considering delaying all the TODOs about moving stuff from repo to repo until a later PR. That would mean sticking with the `to_storage` and `from_storage` methods for now and introducing tests at the save/load interface level (roughly as sketched below). The big "pro" for me here is to get functioning storage live on `main` as an alpha feature. Is that palatable to you?
In the long run I'm still expecting to get `Storable` into the MRO; I just want to introduce an intermediate state on `main` where we have storage without that.
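To make "tests at the save/load interface level" concrete, I'm picturing something roughly like this sketch -- the node package and channel names are placeholders, not the actual test suite:

```python
import unittest

from pyiron_workflow import Workflow


class TestSaveLoad(unittest.TestCase):
    def test_round_trip(self):
        # Placeholder workflow; any package-registered node would do
        wf = Workflow("save_load_demo")
        wf.n = wf.create.standard.UserInput(42)  # placeholder node/package
        wf.run()
        wf.save()

        # A fresh instance with the same label, loaded explicitly
        reloaded = Workflow("save_load_demo")
        reloaded.load()
        self.assertEqual(
            wf.n.outputs.user_input.value,  # placeholder channel name
            reloaded.n.outputs.user_input.value,
        )
```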
I think the bool issue is related to using json for dicts inside h5io, but I would have to check.
Test failure for tinybase is probably contained to the notebooks, so it should be quick on Monday, but if you want this in main before then, go ahead.
"Re-run all jobs" is not re-triggering the CI. I'm just going to try closing and reopening.
`h5io_browser` hard-pins an upper limit on the patch version of `h5io`, so the CI still can't run. Jan already has a version bump PR open on `h5io_browser`, so I expect this will resolve itself quickly.
```
Could not solve for environment specs
The following package could not be installed
└─ pyiron_contrib 0.1.15** does not exist (perhaps a typo or a missing channel).
```
It does though. Something on the internet must just not have updated yet. I'll try again in a few minutes.
0.1.15 is the most recent version showing on anaconda.org, but 0.1.14 is the most recent on my local machine using `conda search -c conda-forge -f pyiron_contrib`, so I guess conda is just updating different parts of itself at different times and this is not github's fault at all.
The "rerun failed jobs" button for the CI report and "rerun job" for individual jobs are both doing nothing, so I am going to close and reopen...
There's some sort of pathing issue going on in the CI environment compared to my local machine; everything fails at the first node instantiation with errors of the form:
```
======================================================================
ERROR: test_run_data_tree (test_node.TestNode)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/share/miniconda3/envs/my-env/lib/python3.10/pathlib.py", line 1175, in mkdir
    self._accessor.mkdir(self, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'start'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/tests/unit/test_node.py", line 64, in setUp
    self.n1 = ANode("start", x=0)
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/snippets/has_post.py", line 16, in __call__
    post(instance, *args, **kwargs)
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/node.py", line 362, in __post__
    do_load = self.storage.has_contents
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/storage.py", line 101, in has_contents
    has_contents = self._tinybase_storage_is_there or self._h5io_storage_is_there
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/storage.py", line 150, in _tinybase_storage_is_there
    if os.path.isfile(self._tinybase_storage_file_path):
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/storage.py", line 133, in _tinybase_storage_file_path
    self.node.graph_root.working_directory.path
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/node.py", line 788, in working_directory
    self._working_directory = DirectoryObject(self.label)
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/snippets/files.py", line 41, in __init__
    self.create()
  File "/home/runner/work/pyiron_workflow/pyiron_workflow/pyiron_workflow/snippets/files.py", line 44, in create
    self.path.mkdir(parents=True, exist_ok=True)
  File "/usr/share/miniconda3/envs/my-env/lib/python3.10/pathlib.py", line 1180, in mkdir
    self.mkdir(mode, parents=False, exist_ok=exist_ok)
  File "/usr/share/miniconda3/envs/my-env/lib/python3.10/pathlib.py", line 1175, in mkdir
    self._accessor.mkdir(self, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'start'
```
Ok, the tests were failing on my local machine as well. It was an issue with the `NodeJob` stuff interacting with the `storage` stuff, plus bad merges that I missed because the CI was out and I wasn't careful enough.

It looks like I'll also need to patch all the storage to run only on python >=3.11 after all.
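For the >=3.11 gating I'm just planning the usual version-guard pattern on the storage tests, something like this sketch (class and test names are placeholders):

```python
import sys
import unittest


@unittest.skipUnless(
    sys.version_info >= (3, 11),
    "Storage is only supported on python >= 3.11",
)
class TestStorage(unittest.TestCase):
    def test_save_and_load(self):
        ...  # storage round-trip assertions go here
```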
Coverage variation | Diff coverage
---|---
:white_check_mark: +1.31% (target: -1.00%) | :white_check_mark: 90.86%
@samwaseda, @JNmpi -- saving and loading is working! Needs this branch, and the `tinydata` branch on contrib.

@pmrv, before I dig deep and start fleshing out tests and examples, I would love to get some high-level feedback from you about the architecture and how I'm (probably mis)using `tinybase` storage tools.

### Design
- Uses `tinybase.storage.H5ioStorage` to save and load nodes (individual function nodes, macros, and workflows). Is written using this branch in contrib, so it is not surprising when tests fail!
- `Node` (and some children) and `DataChannel` define methods for storing their data in/restoring their data from an `H5ioStorage` instance, although the `DataChannel` side is the more bespoke of the two.
- The expectation is that loading happens from an already-live instance of a `Node`; that means that it has already instantiated its child `DataChannel` objects and they are live too -- so we are really just populating attributes of live objects.
- Information about channel connections is stored at the `Composite` level, and these are recreated after all children are available.
- `Macro`s instantiate their children at their own instantiation time, but `Workflow`s need to look at the stored data (package identifier and node class), re-register packages, and re-instantiate their child nodes. Then these nodes can be restored from HDF as usual.
- We delay interaction with `pyiron_contrib` until the last possible moment, as the import still takes forever.

### Features
- `save()` and `load()` interface available at the `Node` level -- works for function nodes, macros, and workflows

It works like this:
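Roughly like this -- the registration call, package name, and node names below are placeholders to show the shape, not the exact API:

```python
from pyiron_workflow import Workflow

# "demo_nodes" stands in for any module on the python path that
# holds decorated nodes; registering it gives a package_identifier
Workflow.register("demo_nodes", "demo")

wf = Workflow("my_workflow")
wf.calc = wf.create.demo.AddOne(x=1)  # placeholder node
wf.run()
wf.save()  # writes storage under the workflow's working directory
```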
Then we can come back later, in a fresh python interpreter (with the same `cwd` so we can find the save file), and do this (see the sketch just below). Note that at load-time we didn't need to register any packages -- everything we needed was found in the stored data and registered automatically.
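Again roughly, with the same placeholder names as above (the explicit `load()` is shown for clarity; instantiating with a matching label may already trigger the reload):

```python
from pyiron_workflow import Workflow

# Fresh interpreter, same cwd -- no manual package registration needed;
# the stored package identifiers are used to re-register and
# re-instantiate the child nodes automatically
wf = Workflow("my_workflow")
wf.load()
print(wf.calc.outputs.y.value)  # placeholder output channel
```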
### Shortcomings

- ~~`Composite` and children of `Function` need to have `macro_creator` and `node_function` defined as class methods, not passed at instantiation, or I'm 99% sure saving/loading will break. This is not too bad, since all the nodes created by the decorators fulfill this automatically, and that's both our preferred and most common way of doing things.~~ This is a non-issue; if you define a new `Node` instance and save it with the same name, indeed, you'll get garbage -- IMO this is just a special case of changing the source code between saving and loading. If you set up the same node instance both times, it saves and loads just fine because it has the necessary function again anyhow.
- `Workflow`'s child nodes need to be created from a package (`wf.create....`) -- this is so they get a `package_identifier` and the loading workflow can re-instantiate them. This is not too much of a pain right now since our package tracking is pretty sloppy -- a package is just a module -- so you just need to move nodes from your jupyter notebook over to a .py file somewhere in your python path and register them like usual prior to saving (see the sketch after this list).
- If you modify a `Macro` instance in your notebook (e.g. by changing its internal connections, or replacing an internal node), you'll still be able to save it, but loading it will probably just break.
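To spell out the "move it to a .py file" workaround from the `Workflow` shortcoming above -- the module name, decorator, and registration call here are illustrative assumptions, not a prescription:

```python
# my_nodes.py -- a placeholder module somewhere on the python path
from pyiron_workflow import Workflow


@Workflow.wrap_as.function_node("y")
def AddOne(x: int = 0) -> int:
    y = x + 1
    return y


# Back in the notebook: register the module so child nodes created via
# wf.create get a package_identifier that survives saving/loading
Workflow.register("my_nodes", "mine")
wf = Workflow("with_package_nodes")
wf.increment = wf.create.mine.AddOne(x=0)
wf.save()
```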
### TODO

- ~~`tinybase` from contrib~~ Do it later, after contrib 949 is merged
- ~~`Storable` (at least `Node` -- maybe `ChannelData` still gets a different solution) so we can use `_re/store` directly instead of the bespoke `to/from_storage`~~ Do it later, we have an issue placeholder #162
- ~~`tinybase` tools used to be available on conda~~ Git-copy it to this repo instead (and later, per above)

Closes #153