Open ricardoV94 opened 10 months ago
When would that be useful?
When rerunning notebooks, or any workflow that saves and loads traces, while you are still tinkering with the model.
You don't need to bother naming traces, or worry about overwriting old ones, since the cache key is derived automatically from the model and its data.
I usually rely on things like MLFlow for storing artifacts like this.
I'm not familiar with MLflow. The idea here is that it pairs each saved trace with the exact model, sampling function, and arguments that were used.
Basically, the model and the function kwargs together form the cache key.
Does this have any parallel to your workflow with MLflow?
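For illustration, here is a minimal stdlib-only sketch of the cache-key idea described above, not PyMC's actual implementation: `cached_sample` and `model_repr` are hypothetical names, and the real version would derive the key from the model graph and its data rather than a string.

```python
import hashlib
import os
import pickle
import tempfile

def cached_sample(cache_dir, model_repr, sample_fn, **kwargs):
    """Hypothetical sketch: hash a representation of the model together
    with the sampling kwargs to form a cache key, and reuse a saved
    trace from disk when the key matches a previous run."""
    # The model representation plus sorted kwargs are the cache key.
    key_material = pickle.dumps((model_repr, sorted(kwargs.items())))
    key = hashlib.sha256(key_material).hexdigest()
    path = os.path.join(cache_dir, f"{key}.pkl")
    if os.path.exists(path):
        # Cache hit: same model and same sampling arguments as before.
        with open(path, "rb") as f:
            return pickle.load(f)
    # Cache miss: actually sample, then store the trace under the key.
    trace = sample_fn(**kwargs)
    with open(path, "wb") as f:
        pickle.dump(trace, f)
    return trace
```

Changing either the model representation or any kwarg (e.g. `draws`) produces a different key, so tinkering with the model naturally invalidates stale traces without manual bookkeeping.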