Closed by N-Wouda 1 year ago
Some random thoughts related to the question: how good is LNS compared to ALNS?
In Stützle and Ruiz (2018), two interesting conclusions are drawn in Chapter 4 where they perform numerical experiments with IG on the permutation flow shop problem:
> As a conclusion from this study, the most significant factor is the local search and the NEH reconstruction. Most other factors have less importance.
There's also this paper about the A in ALNS: Turkeš et al. (2021) (not sure if we've linked to it before). I read this as "the adaptive layer is probably not that beneficial in general, since it also adds complexity".
We now have SISR as part of the CVRP example. We can add another example doing LNS with $\alpha$-UCB for a job shop problem later on: that ticks the IG box, and shows we're not just a one-trick-ALNS-pony.
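Since $\alpha$-UCB might be unfamiliar: below is a minimal sketch of the bandit rule behind $\alpha$-UCB-style operator selection. This is my own illustration of the idea, not the library's implementation; the class name and interface are made up, and the exploration term is one common UCB variant.

```python
import math


class AlphaUCBSelector:
    """Sketch of alpha-UCB-style operator selection: pick the operator that
    maximises its average reward plus an exploration bonus scaled by alpha.
    Illustrative only; not the library's actual implementation."""

    def __init__(self, num_ops, alpha=0.1):
        self.alpha = alpha
        self.rewards = [0.0] * num_ops  # cumulative reward per operator
        self.counts = [0] * num_ops     # times each operator was chosen
        self.time = 0                   # total selections made so far

    def select(self):
        def ucb(op):
            # Mean observed reward plus an optimism bonus that shrinks as
            # the operator gets tried more often.
            mean = self.rewards[op] / self.counts[op] if self.counts[op] else 0.0
            bonus = math.sqrt(self.alpha * math.log(1 + self.time) / (1 + self.counts[op]))
            return mean + bonus

        return max(range(len(self.counts)), key=ucb)

    def update(self, op, reward):
        # Record the reward observed after applying operator `op`.
        self.rewards[op] += reward
        self.counts[op] += 1
        self.time += 1
```

Under this rule, operators that keep producing improvements are selected more often, while the bonus term guarantees that every operator is still tried occasionally.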
Another good direction might be to offer more diagnostics. Can we, for example, help users tune parameters, or provide tools to efficiently tune an ALNS instance?
There are three "parameter groups" that we might want to tune in ALNS: the destroy/repair operators themselves, the operator selection scheme, and the acceptance criterion.
It would be nice to have a `tune` module that does some of the following:
- `tune.alns` should return $n$ configurations of ALNS with a sampled combination of those destroy/repair operators.
- `tune.accept` should return $n$ sampled configurations/instances of RRT.

A simple workflow for tuning the acceptance criterion would look as follows:
```python
import numpy as np

alns = make_alns(...)
init = ...
select = ...
stop = ...

data = []
for idx, accept in tune.accept(RecordToRecordTravel, parameter_space, sampling_method):
    res = alns.iterate(init, select, accept, stop)
    data.append(res.best_state.objective())

# Index of the best configuration
print(np.argmin(data))
```
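The `tune.accept` generator used above doesn't exist yet. A minimal sketch of what it could do, assuming a plain dict as the parameter space and random sampling as the sampling method (all names here are hypothetical, and `DummyRRT` is a stand-in that does not mirror the real `RecordToRecordTravel` signature):

```python
import random


def tune_accept(accept_cls, parameter_space, n=10, seed=42):
    """Hypothetical sketch of tune.accept: yield (idx, accept) pairs, each
    built from a randomly sampled parameter combination."""
    rng = random.Random(seed)
    for idx in range(n):
        params = {name: rng.choice(values) for name, values in parameter_space.items()}
        yield idx, accept_cls(**params)


# Stand-in acceptance criterion, for illustration only.
class DummyRRT:
    def __init__(self, start_threshold, end_threshold, step):
        self.start_threshold = start_threshold
        self.end_threshold = end_threshold
        self.step = step


# Candidate values per constructor argument (made-up numbers).
space = {
    "start_threshold": [5, 10, 20],
    "end_threshold": [0, 1],
    "step": [0.1, 0.5, 1.0],
}

for idx, accept in tune_accept(DummyRRT, space, n=3):
    print(idx, accept.start_threshold, accept.step)
```

More sophisticated sampling (grid, Latin hypercube, Bayesian) could slot in behind the same generator interface without changing the workflow above.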
This could be extended to tuning ALNS and operator selection schemes as well. I don't have much experience with tuning, so I don't know exactly what the tuning interface should look like.
We probably shouldn't invent our own half-baked solution for this. The ML community already has a lot of this, with e.g. `keras-tuner`, `ray.tune`, etc. Those are used by a lot of people, apparently with some success. At some later point it could pay off to see how they work, and whether we can do something similar in terms of interface for our code.
I'm closing this issue because tuning is now in #109, and the other ideas from last summer have (for the most part) already been implemented.
[partially based on https://doi.org/10.1016/j.cor.2022.105903, thanks @leonlan]