Open ablaom opened 1 year ago

mlflow is an API and web-based UI for logging parameters, code versions, metrics, and output files when running machine learning experiments, and for later visualizing the results. mlflow integrations already exist for several other ML platforms: Scikit-learn, Keras, Gluon, XGBoost, LightGBM, Statsmodels, Spark, Fastai, and PyTorch.

Further to this short project outline, and after preliminary discussions with @pebeto and @deyandyankov, I give below a tentative design proposal for integration of [mlflow](https://www.mlflow.org) with MLJ, using MLFlowClient.jl, which already provides a Julia interface to mlflow.
It should be possible to request logging for these actions:

1. **Serializing machines**: calling `MLJModelInterface.save(location, mach)` whenever `location` is an mljflow experiment (instead of a path to file).
2. **Performance evaluation**: calling `evaluate(mach, ...)` or `evaluate(model, data..., ...)`, for any `mach`/`model` (including composite models, such as pipelines).
3. **Hyperparameter tuning**: calling `MLJModelInterface.fit(TunedModel(model, ...), ...)`, for any `model` (and hence calling `fit!` on an associated machine).
4. **Controlled model iteration**: calling `MLJModelInterface.fit(IteratedModel(model, ...), ...)`, for any `model` (and hence calling `fit!` on an associated machine).

Moreover, it should be possible to arrange automatic logging, i.e., without explicitly requesting logging for each such action.
What should be logged in each case:

1. **In serialization**: whatever `save(file, mach)` ordinarily saves should instead be saved as an mlflow artifact.

2. **In performance evaluation**, compulsory:

   - the `model` (i.e., its hyperparameters)
   - the `measures` (aka metrics) applied
   - each `measurement`

   And, if possible:

   - the type of resampling strategy (e.g., `CV`) and, if possible, its parameters (e.g., `nfolds`)
   - `repeats` (to signal the possibility that this is a Monte Carlo variation of resampling)

3. **In tuning**, there are three kinds of logging target:

   - the optimal model;
   - each model in the search (each hyperparameter-set);
   - the final trained model (different from the last "evaluated" model, if `retrain=true`; see here).

4. **In iteration**, for the partially trained model at every "break point" in the iteration:

   - the same compulsory items as in 2, plus a final training error, if available;
   - serialization of the corresponding "training machine" (see docs), as an artifact.
I'm less clear about the details here, but here are some comments:

- In tuning, each model evaluated should be a separate run within the same experiment as the optimal model's run.
- Iteration would be similarly structured.
- Since a model (hyperparameter set) can be nested (e.g., pipelines and wrappers), I suggest that a flattened version of the full tree of hyperparameters be computed for logging purposes, with suggestive composite names created for the nested hyperparameters (see the sketch following this list). Possibly, we may additionally want to log `model` as a Julia-serialized artifact?
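To make the flattening idea concrete, here is a minimal sketch. It assumes only that `MLJBase.params(model)` returns the (possibly nested) named tuple of hyperparameters; the `flatten_params` helper and the `__` separator for composite names are illustrative choices, not existing API.

```julia
using MLJBase   # for `params(model)`, the nested named tuple of hyperparameters

# Flatten a nested hyperparameter tree into a flat Dict with composite names.
# The "__" separator is an arbitrary, illustrative convention.
function flatten_params(tree::NamedTuple, prefix::String="")
    out = Dict{String,Any}()
    for (k, v) in pairs(tree)
        name = isempty(prefix) ? string(k) : string(prefix, "__", k)
        if v isa NamedTuple
            merge!(out, flatten_params(v, name))
        else
            out[name] = v
        end
    end
    return out
end

# For a pipeline, `flatten_params(params(pipe))` might then contain entries such as
# "knn_regressor__K" => 5, ready to be logged as mlflow parameters.
```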
Some suggestions:

- In serialization, one just replaces `location` in `MLJBase.save(location, mach)` with the (wrapped?) mljflow experiment.
- In performance evaluation, we add a new kwarg `logger=nothing` to `evaluate`/`evaluate!`, which the user can set to a (wrapped?) mljflow experiment.
- Cases 3 and 4 are similar, but `logger=nothing` becomes a new field of the wrapper (the `TunedModel` or `IteratedModel` structs). See the sketch following this list.
- Add a global variable `DEFAULT_LOGGER`, accessed/set by the user with new methods `logger()`/`logger(default_logger)`, initialized to `nothing` in `__init__`, and change the above defaults from `logger=nothing` to `logger=DEFAULT_LOGGER`.
- We could either add extra kwargs/fields to control the level of verbosity or, if we are wrapping experiments anyway, include the verbosity level in the experiment wrapper. I'm leaning towards the latter (or just making everything compulsory).
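To make these suggestions concrete, here is a minimal sketch of the user-facing API they imply. Nothing here exists yet: `MLFlowLogger` is a hypothetical name for the experiment wrapper, and `model`, `X`, `y`, and `r` stand for any model, data, and tuning range.

```julia
using MLJ

# Hypothetical wrapper around an mljflow experiment (name and keyword are illustrative):
mlf_logger = MLFlowLogger("http://localhost:5000"; experiment_name="my experiment")

# Case 2 (performance evaluation): new `logger` kwarg, defaulting to `nothing`:
evaluate(model, X, y; resampling=CV(nfolds=5), measure=rms, logger=mlf_logger)

# Cases 3 and 4 (tuning and iteration): `logger` becomes a field of the wrapper struct:
tuned = TunedModel(model=model, range=r, measure=rms, logger=mlf_logger)
iterated = IteratedModel(model=model, controls=[Step(1), Patience(5)], logger=mlf_logger)

# Automatic logging via the proposed global default:
logger(mlf_logger)   # sets DEFAULT_LOGGER, so subsequent calls can omit `logger=...`
```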
A proof of concept already exists for performance evaluation. It shows how to add the new functionality using an extension module, which also forces us to keep the extension as disentangled from the existing functionality as possible, for easier maintenance.
When a `TunedModel` is `fit`, it "essentially" calls `evaluate!` on each model in the tuning range, so we can get some functionality in that case by simply passing the `logger` parameter on. What actually happens is that `fit` wraps the model as `Resampler(model, ...)`, which has fields for each kwarg of `evaluate`; this resampler gets wrapped as a machine, the machine is trained, and then a special `evaluate` method is called on this machine to get the actual evaluation object. So we also need to add `logger` to the `Resampler` struct (which is not public). Some hints about how to flatten models appear here and here. A rough sketch of the `Resampler` flow just described follows.
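In the sketch below, the proposed `logger` is threaded through that flow. `Resampler` is private to MLJBase, so the constructor keywords shown, the `fit_one_model` wrapper, and the `log_evaluation` helper are all illustrative assumptions, not existing API.

```julia
# Greatly simplified picture of what fitting one model in the tuning range involves:
function fit_one_model(tuned_model, current_model, data...)
    resampler = Resampler(model=current_model,
                          resampling=tuned_model.resampling,
                          measure=tuned_model.measure,
                          logger=tuned_model.logger)     # <- proposed new field
    mach = machine(resampler, data...)
    fit!(mach, verbosity=0)
    e = evaluate(mach)                 # the evaluation object for this model
    # with the new field in place, logging could happen here or inside `evaluate`:
    isnothing(tuned_model.logger) || log_evaluation(tuned_model.logger, e)  # hypothetical
    return e
end
```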
In `IteratedModel` we already have the `Save` control. Currently the default filename is `"machine.jls"`, but if `!isnothing(logger)` we could instead pass `logger` as the default. Then, we change the default for `controls` to include `Save()` if `!isnothing(logger)`. I imagine something similar could be worked out for `WithEvaluationDo` and `WithTrainingLossesDo` to get the other information we want logged; see the sketch below.
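A minimal sketch of how the default controls might be assembled when a logger is supplied. The baseline controls shown are only representative, `Save(logger)` assumes the proposed change to `Save`'s default, and `log_evaluation`/`log_training_losses` are hypothetical helpers.

```julia
using MLJIteration  # Step, Patience, InvalidValue, Save, WithEvaluationDo, WithTrainingLossesDo

function default_controls(logger)
    controls = Any[Step(1), Patience(5), InvalidValue()]   # representative defaults
    if !isnothing(logger)
        # serialize the training machine to the logger instead of "machine.jls":
        push!(controls, Save(logger))
        # forward evaluations and training losses to the logger:
        push!(controls, WithEvaluationDo(e -> log_evaluation(logger, e)))           # hypothetical
        push!(controls, WithTrainingLossesDo(l -> log_training_losses(logger, l)))  # hypothetical
    end
    return controls
end
```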
cc @pebeto @deyandyankov @tlienart @darenasc
Thanks for putting this together, @ablaom. A few notes and suggestions from my end.
Regarding `logger=nothing` and **1. Serializing machines**, it might be useful to think about the different types of argument we might provide. As per `MLFlowClient`'s reference, there are three main types we can use as parameters for logging.

`MLFlowClient.MLFlow` is the type used to define an mlflow client; it is usually instantiated as `mlf = MLFlow("http://localhost:5000")`. When we then create an experiment and a run, it looks like this:
```julia
# Create MLFlow instance
mlf = MLFlow("http://localhost:5000")

# Initiate new experiment
experiment_id = createexperiment(mlf; name="experiment name, default is a uuid")

# Create a run in the new experiment
exprun = createrun(mlf, experiment_id)
```
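For context, logging against that run might then look like the following; this assumes `MLFlowClient`'s `logparam` and `logmetric` helpers take `(client, run, key, value)`, and the keys and values shown are purely illustrative.

```julia
# Hyperparameters map naturally to mlflow params, measurements to mlflow metrics:
logparam(mlf, exprun, "model_type", "KNNRegressor")
logparam(mlf, exprun, "K", "5")
logmetric(mlf, exprun, "rms", 0.42)
```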
I'll start with the simplest case described in the original post here:

> Serializing machines: Calling `MLJModelInterface.save(location, mach)` whenever `location` is an mljflow experiment (instead of a path to file).

Here `location` could be an `MLFlow`, an `MLFlowExperiment`, or an `MLFlowRun`.
The most obvious case is when we provide an `MLFlowRun`. Runs belong to experiments, and experiments belong to an mlflow instance; a single experiment may have zero or more runs. Thus, we could define:

- `MLJModelInterface.save(location::MLFlowRun, mach)`: save the machine as a serialized artifact in an existing run.
- `MLJModelInterface.save(location::MLFlowExperiment, mach)`: create a new run in the existing experiment and fall back to `MLJModelInterface.save(location::MLFlowRun, mach)`.
- `MLJModelInterface.save(location::MLFlow, mach)`: create a new experiment in the provided `location::MLFlow` and fall back to `MLJModelInterface.save(location::MLFlowExperiment, mach)`.

A rough sketch of this dispatch cascade follows.
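In the sketch below, the helpers `new_run`, `new_experiment`, and `upload_artifact` are hypothetical stand-ins for the relevant `MLFlowClient` calls (which would also need access to the `MLFlow` client itself), and `MLJBase.save` stands for the existing file-based serialization.

```julia
using MLJModelInterface, MLJBase, MLFlowClient

# Save to an existing run: serialize to a temporary file, then attach it as an artifact.
function MLJModelInterface.save(location::MLFlowRun, mach)
    path = tempname() * ".jls"
    MLJBase.save(path, mach)           # existing file-based serialization
    upload_artifact(location, path)    # hypothetical artifact-upload helper
end

# Save to an existing experiment: create a fresh run, then recurse.
MLJModelInterface.save(location::MLFlowExperiment, mach) =
    MLJModelInterface.save(new_run(location), mach)         # `new_run` is hypothetical

# Save to an mlflow instance: create a fresh experiment, then recurse.
MLJModelInterface.save(location::MLFlow, mach) =
    MLJModelInterface.save(new_experiment(location), mach)  # `new_experiment` is hypothetical
```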
We can use similar logic when initiating logging from different places, such as performance evaluation, hyperparameter tuning, and controlled model iteration.
@deyandyankov I bundled an `MLFlow` object inside a general `MLFlowInstance` type that lets us store the most important project configuration: `base_uri`, `experiment_name`, and `artifact_location` (we can expand these; it's just a draft). You can see more about that here. With that, no code from MLFlowClient is loaded up front: we need to import the library before we can use the methods that log our info, and if it isn't loaded, it's easy to throw an error asking the user to do so. A rough sketch of such a wrapper is below.
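For illustration only (this is not the linked draft), such a wrapper might look like the following; the defaults are invented.

```julia
# Lightweight wrapper holding mlflow configuration only. It deliberately references
# nothing from MLFlowClient, so it can live in MLJ proper while the actual logging
# methods live in an extension that requires MLFlowClient.
Base.@kwdef struct MLFlowInstance
    base_uri::String = "http://localhost:5000"
    experiment_name::String = "MLJ experiment"
    artifact_location::Union{String,Nothing} = nothing
end
```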