DAGWorks-Inc / hamilton

Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows that encode lineage/tracing and metadata. It runs and scales everywhere Python does.
https://hamilton.dagworks.io/en/latest/
BSD 3-Clause Clear License

Better support for caching / checkpointing development workflow - umbrella issue #940

Open skrawcz opened 3 months ago

skrawcz commented 3 months ago

Is your feature request related to a problem? Please describe.

We need a simpler caching & checkpointing story, with full visibility into what's going on.

Describe the solution you'd like

  1. Checkpointing -- i.e. cache outputs and restart from the latest point.
  2. Intelligent Caching -- i.e. cache nodes and only rerun things if code or data has changed.

These should come with the ability to:

  1. visualize what is going on when using them.
  2. work in a notebook / cli / library context
  3. extend how to hash data & where to store it
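To make mode 1 concrete, here is a minimal stdlib-only sketch of checkpointing: cache node outputs on disk and restart from the latest saved point. The `checkpoint` helper and the one-pickle-per-node layout are hypothetical illustrations, not Hamilton's API:

```python
import pickle
import tempfile
from pathlib import Path

# Hypothetical on-disk layout -- one pickle file per node output.
CHECKPOINT_DIR = Path(tempfile.mkdtemp())

def checkpoint(node_name: str, compute):
    """Return the saved value for node_name if it exists, else compute and save it."""
    path = CHECKPOINT_DIR / f"{node_name}.pkl"
    if path.exists():
        # restart from the latest point: skip recomputation entirely
        return pickle.loads(path.read_bytes())
    value = compute()
    path.write_bytes(pickle.dumps(value))
    return value

# first run computes and saves; a second run (e.g. after a crash) loads from disk
first = checkpoint("total", lambda: sum(range(100)))
second = checkpoint("total", lambda: 1 / 0)  # never called -- value comes from disk
```

Mode 2 (intelligent caching) additionally needs a fingerprint of code + data in the cache key, so that edits invalidate stale entries instead of silently reusing them.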

Prior art

  1. You do it yourself outside of Hamilton and use the overrides argument in .execute/.materialize(..., overrides={...}) to inject pre-computed values into the graph. That is, you run your code, save the things you want, and then you load them and inject them using overrides=. TODO: show example.
  2. You use the Data Savers & Data Loaders (i.e. materializers). Similar to the above, but you save and then load & inject the data via materializers instead of doing it by hand. TODO: show example.
  3. You use the CachingGraphAdapter, which requires you to tag functions to cache along with the serialization format.
  4. You use the DiskCacheAdapter, which uses the diskcache library to store the results on disk.

Could use https://books.ropensci.org/targets/walkthrough.html#change-code as inspiration.

Additional context

Slack threads:

Next steps:

TODO: break the tasks in this issue up into smaller, manageable chunks.

jmbuhr commented 3 months ago

I took one of the targets examples and transferred it one-to-one to hamilton to see how the concepts compare. Both workflows are implemented in modules and make use of helper functions from a separate module. Then both are started interactively from quarto documents and their results and graphs visualized in the rendered output of said notebooks: http://jmbuhr.de/targets-hamilton-comparison/ (source code: https://github.com/jmbuhr/targets-hamilton-comparison)

Again, this is for exploration of possibilities, not to impose paradigms on you :)

In this first pass I noticed two things I was missing in hamilton compared to targets when it comes to caching:

jmbuhr commented 3 months ago

For inspiration, the developer documentation of how targets does caching might come in handy: https://books.ropensci.org/targets-design/data.html#skipping-up-to-date-targets

skrawcz commented 1 month ago

Update - we've got a candidate API:

```python
c = CacheStore()  # this could house various strategies, from basic checkpointing to more sophisticated fingerprinting
dr = driver.Builder()...with_cache(c, **kwargs).build()

# first run -- nothing cached
dr.execute([output1], inputs=A)

# change some code -- any code, upstream or downstream of what was run before --
# then rebuild the driver
dr = driver.Builder()...with_cache(c, **kwargs).build()
# this should recompute as needed -- and recompute downstream as needed
dr.execute([output2], inputs=A)

# no-op if run again
dr.execute([output2], inputs=A)

# should only recompute what the changed inputs impact -- going downstream as needed
dr.execute([output2], inputs=A')
```
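The semantics above (recompute only when code or data changes, no-op otherwise) can be sketched with a plain-Python fingerprint cache. `ToyCacheStore` is an illustration of the idea, not the proposed `CacheStore` design:

```python
import hashlib
import pickle

class ToyCacheStore:
    """Toy fingerprinting cache: key = hash(code) + hash(inputs)."""

    def __init__(self):
        self._results = {}
        self.computations = 0  # how many times a node actually ran

    def _fingerprint(self, fn, inputs: dict) -> str:
        code = fn.__code__.co_code                   # "the code": compiled bytecode
        data = pickle.dumps(sorted(inputs.items()))  # "the data": serialized inputs
        return hashlib.sha256(code + data).hexdigest()

    def execute(self, fn, inputs: dict):
        key = self._fingerprint(fn, inputs)
        if key in self._results:     # same code + same data -> no-op
            return self._results[key]
        self.computations += 1       # code or data changed -> recompute
        result = fn(**inputs)
        self._results[key] = result
        return result

store = ToyCacheStore()

def total(n: int) -> int:
    return sum(range(n))

a = store.execute(total, {"n": 10})  # computes
b = store.execute(total, {"n": 10})  # no-op: fingerprint unchanged
c = store.execute(total, {"n": 20})  # inputs changed -> recomputes
```

A real implementation would also need to hash transitive dependencies (upstream node code and values), which is where the "recompute downstream as needed" behavior comes from.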

Then there's some nuance around:

skrawcz commented 2 days ago

Updates:

.with_cache() will be solely about the fingerprinting-based caching strategy; .with_checkpointing() will be solely about checkpointing.