cc @azev77
I'm not going to straight up say it is definitely OK for us (I haven't done more than a cursory look yet), but it should be OK. Most of our implementation is in `models!` only; we would have to redo `result` (`extras`), but it's such a puny method that I can't really see it being a massive deal, provided all of the same information is still there.
As I said, an early "quick glance" conclusion based on what you've written here.
Thanks for the feedback @yalwan-iqvia. As I say, I'm happy to check this out myself and make the necessary PR. Working on this today.
Okay, it's possible to implement the new API for TreeParzen. This branch passes tests in conjunction with the current PR on my machine under Julia 1.4.
Below is a sketch of the necessary changes to TreeParzen.jl
@yalwan-iqvia I can give more details in the final PR if you are happy with the general direction.
- `results` can be dumped. There is no need to implement `extras`, because I am not including anything extra in the history that is intended for the user to inspect.
- Instead of "completing" trial objects in `results`, I added a trial history to `state`; the `models` method now begins by updating this trial history as follows:
- For each new entry of the MLJ `history`, which includes a performance estimate `measurement` and a trial object, the trial object is completed with `measurement` using `tell!`, and a deep copy is pushed onto the trial history in `state` (not sure if a deep copy is really required, but this was the status quo); see the sketch after this list.
- The rest of `models` is the same as before, except that it now needs to include the updated `state` in its return value, because `models!` is now `models`.
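To make the update step above concrete, here is a minimal sketch under stated assumptions: `FakeTrial`, `tell!`, `update_trial_history!`, and the `entry.metadata` field are stand-ins for illustration, not the actual TreeParzen.jl API.

```julia
# A sketch of the trial-history update; all names here are stand-ins.

struct FakeTrial
    loss::Base.RefValue{Float64}
end
tell!(trial::FakeTrial, loss) = (trial.loss[] = loss)

# Complete each not-yet-processed trial with its measurement and archive a copy:
function update_trial_history!(trial_history, mlj_history)
    for entry in mlj_history[(length(trial_history) + 1):end]
        trial = entry.metadata                   # trial object stored by the strategy
        tell!(trial, first(entry.measurement))   # "complete" it with the estimate
        push!(trial_history, deepcopy(trial))    # deep copy, as in the status quo
    end
    return trial_history
end
```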
Previously there was no separate trial history in `state`; instead, one needed to be created on every call to `models` using the MLJ `history`. It seems to me the new approach is equivalent to the old approach at worst and slightly more efficient at best, but I have not benchmarked this.
A difference to note is that the trial history is no longer exposed to the user in the reported history. If desired, this could be remedied either by implementing `extras` (or by overloading `report_history`), but I'm not sure this is needed, as the trial objects are implementation-specific, yes? The user still has access to the MLJ representation of the model and corresponding performance estimates (and indeed for all specified measures, not just the first one, as currently implemented).
@yalwan-iqvia If you are happy with this I will merge the current PR, and later make a formal PR to TreeParzen.
@ablaom I've opened what you did as a PR against TreeParzen; a few comments there, but nothing to do with the direction of the work. This looks fine to me.
Context: https://github.com/alan-turing-institute/MLJ.jl/issues/487, #40

Replaces: https://github.com/alan-turing-institute/MLJTuning.jl/pull/74
Currently the implementer of a new strategy implements a `best` method for deciding how to extract the history entry corresponding to the "best" (optimal) model. All the current strategies apply the "naive" heuristic, which simply minimizes (or maximizes) the user-specified `measure` evaluations, aggregated over all samples (folds) in resampling.

This PR adds an interface point for the user to specify the "selection heuristic", with the tuning strategy surrendering that responsibility. The idea is that most selection heuristics would apply generically (i.e., to any tuning strategy), but the PR leaves open the possibility that some future heuristics might be strategy-specific.
Formally, a selection heuristic is a subtype of a new abstract type `SelectionHeuristic`, and a new concrete subtype implements a `best` method (and a trait `supports_heuristic` to indicate if it is generic or strategy-specific). A moderately sophisticated user could add their own custom heuristics; see here, and the sketch below. Only a single selection heuristic, `NaiveSelection`, is introduced in the current PR.
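For illustration, a hedged sketch of a custom heuristic follows; the exact `best(heuristic, history)` signature, the `measurement` field of history entries, and the name `LowestFirstMeasure` are assumptions based on the description above, not confirmed API:

```julia
# A hypothetical generic selection heuristic (illustrative names only):
import MLJTuning

struct LowestFirstMeasure <: MLJTuning.SelectionHeuristic end

# Pick the history entry whose first aggregated measurement is smallest:
function MLJTuning.best(::LowestFirstMeasure, history)
    scores = [first(entry.measurement) for entry in history]
    return history[argmin(scores)]
end

# Trait declaring the heuristic applicable to any tuning strategy:
MLJTuning.supports_heuristic(::Any, ::LowestFirstMeasure) = true
```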
This PR will break the externally implemented strategy `TreeParzen`. I will be happy to make the necessary PR to TreeParzen.jl to fix this. The built-in strategies `Grid` and `RandomSearch` are fixed in this PR.

**edit** The `models!` method is replaced with a non-mutating `models` method to make the whole interface "functional" (`state` objects can now be immutable, and `models` adds `new_state` to its return value).

**New for users**
Adds keyword `selection_heuristic=...` to the constructor of `TunedModel` instances. The default is `NaiveSelection()`, which gives the existing behaviour (with an option to specify `weights` for multiple measures); see the usage sketch below.
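A short usage sketch, assuming this PR; the model, range, and measure are arbitrary choices for illustration:

```julia
# Usage sketch: the only new piece is the `selection_heuristic` keyword.
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
tree = Tree()
r = range(tree, :max_depth, lower=1, upper=10)

tuned = TunedModel(
    model=tree,
    tuning=RandomSearch(),
    range=r,
    measure=log_loss,
    selection_heuristic=NaiveSelection(),  # new keyword, shown with its default
)
```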
**Breaking for users**
This PR also tweaks the reporting for `TunedModel` instances. If `mach` is a machine trained on a `TunedModel` instance, and `r = report(mach)`, then (see the sketch after this list):

- `r.best_result` no longer exists (replaced with `r.best_history_entry`)
- `r.history` is now a vector of named tuples (true also for the internal representation of the history)
- `r.history` excludes implementation-specific "model metadata" (necessarily included in the internal history)
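A minimal access sketch, continuing the hypothetical `tuned` example above:

```julia
# Report access under this PR (data choice is arbitrary):
X, y = @load_iris
mach = machine(tuned, X, y)
fit!(mach)

r = report(mach)
r.best_history_entry   # named tuple for the optimal model (replaces r.best_result)
r.history              # vector of named tuples, one per evaluated model
```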
**Main changes for the tuning strategy API**
- no `best` method for the strategy to implement (that responsibility now rests with the selection heuristic)
- the `result` method is renamed `extras`, with a simplified role (no need to handle "model metadata", which is automatically included in the history under the hood)
- `tuning_report` has a simplified role (no need to generate the user form of the history)
- `models!` is replaced by a non-mutating `models` method with identical signature, but with `new_state` added to the return value (now a tuple `(vector_of_models, new_state)` instead of `vector_of_models`); see the sketch after this list

For details refer to the revised README.md.
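A minimal sketch of the last point, for a made-up strategy; `DummySearch` and its trivial proposal logic are assumptions for illustration, with the signature as described in the bullet above:

```julia
# Sketch: a dummy strategy under the new functional API (illustrative only).
import MLJTuning

struct DummySearch <: MLJTuning.TuningStrategy end

# Same signature as the old `models!`, but the state is returned, not mutated:
function MLJTuning.models(tuning::DummySearch, model, history, state,
                          n_remaining, verbosity)
    vector_of_models = [deepcopy(model) for _ in 1:n_remaining]  # trivial proposals
    new_state = state                   # `state` objects may now be immutable
    return vector_of_models, new_state  # tuple return replaces `models!`'s vector
end
```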
**To do before merging**
cc @yalwan-iqvia