Hi @ricohageman, thanks for opening an issue!
For a project the maximum duration for the optimization is known in advance and fixed. However, the duration required per iteration depends on certain destroy operator settings and the problem size. Therefore, the number of iterations is unknown at the start and we cannot use the autofit method of the SimulatedAnnealing class.
This is something I have thought about for a bit, but unfortunately do not have a good solution for, other than the one you have already described. Since the number of iterations is not fixed, there is no clear way to know beforehand how many temperature updates SA will see. The current autofit() method is really tied to a maximum number of iterations, I am afraid.
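For reference, this is roughly how autofit() is used when the iteration budget is known up front (the 0.05 and 0.50 values and init_sol are placeholders, not recommendations):

from alns.accept import SimulatedAnnealing

num_iters = 10_000  # must be known in advance, which is exactly the problem here

# Accept candidates up to 5% worse than the initial objective with 50%
# probability at the start of the search; init_sol is your initial solution.
accept = SimulatedAnnealing.autofit(init_sol.objective(), 0.05, 0.50, num_iters)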
Is there another approach that could work?
One alternative that I think makes sense is reheating: after a fixed number of iterations, simply increase the SA temperature to a higher value, and then continue until the runtime budget is exhausted, or the fixed number of iterations is again exhausted. A custom StoppingCriterion considering both a maximum runtime and a number of iterations can account for this quite elegantly.
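A minimal sketch of such a combined criterion, simply wrapping the existing MaxRuntime and MaxIterations (the class name is my own):

from alns.stop import MaxIterations, MaxRuntime


class MaxRuntimeOrIterations:
    """Stops as soon as either the runtime or the iteration budget is exhausted."""

    def __init__(self, max_runtime: float, max_iterations: int):
        self._max_runtime = MaxRuntime(max_runtime)
        self._max_iterations = MaxIterations(max_iterations)

    def __call__(self, rnd, best, current):
        # Evaluate both so the iteration counter keeps advancing either way.
        done_time = self._max_runtime(rnd, best, current)
        done_iters = self._max_iterations(rnd, best, current)
        return done_time or done_iters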
Would something like this be useful to add to this package?
I am not sure. IIUC, what you do is somewhat similar to the following, splitting the search into a short test/estimation phase, and a later phase that uses the remaining time budget:
from alns.accept import SimulatedAnnealing, HillClimbing
from alns.stop import MaxIterations, MaxRuntime
from alns import ALNS

time_budget = 60  # seconds, say
test_iters = 20

# First few iterations are used to get an estimate of the average runtime
# per iteration.
accept = HillClimbing()
stop = MaxIterations(test_iters)

alns = ALNS()
res = alns.iterate(..., accept=accept, stop=stop)

avg_iter_time = res.statistics.total_runtime / test_iters
remaining_time = time_budget - res.statistics.total_runtime
remaining_iters = int(remaining_time // avg_iter_time)

# Now that we have an estimate for the remaining number of iterations
# that fit in the time budget, we can fit an SA criterion and perform
# the actual ALNS loop.
accept = SimulatedAnnealing.autofit(..., remaining_iters)
stop = MaxRuntime(remaining_time)

alns = ALNS()
res = alns.iterate(res.best_state, ..., accept=accept, stop=stop)
We could add a description of this approach to the documentation (perhaps in the API description of SimulatedAnnealing.autofit).
I think your problem can also be solved by using a time-based SA. Instead of using iterations, you could use the elapsed time, as a percentage of the maximum runtime, to compute the current temperature.
Here's a custom implementation of this which hopefully covers your use case.
import time

import numpy as np

from alns.accept.AcceptanceCriterion import AcceptanceCriterion


class TimeBasedSimulatedAnnealing(AcceptanceCriterion):
    def __init__(
        self,
        start_temperature: float,
        end_temperature: float,
        max_runtime: float,
        method: str = "exponential",
    ):
        self.start_temperature = start_temperature
        self.end_temperature = end_temperature
        self.max_runtime = max_runtime
        self.method = method

        # The clock starts when the criterion is constructed, so create it
        # just before calling ALNS.iterate().
        self.start_time = time.perf_counter()

    def __call__(self, rnd, best, current, cand):
        # pct_time=0 should give the start temperature,
        # pct_time=1 should give the end temperature.
        pct_time = (time.perf_counter() - self.start_time) / self.max_runtime

        if self.method == "linear":
            delta = self.start_temperature - self.end_temperature
            temperature = self.start_temperature - delta * pct_time
        elif self.method == "exponential":
            delta = self.end_temperature / self.start_temperature
            temperature = self.start_temperature * delta**pct_time
        else:
            raise ValueError(f"Method {self.method} not known.")

        probability = np.exp((current.objective() - cand.objective()) / temperature)
        return probability >= rnd.random()

    @classmethod
    def autofit(
        cls,
        init_obj: float,
        worse: float,
        accept_prob: float,
        max_runtime: float,
        method: str = "exponential",
    ):
        # There's actually no more need for this classmethod, but I made one anyway.
        start_temp = -worse * init_obj / np.log(accept_prob)
        return cls(start_temp, 1, max_runtime, method)
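Using it would then mirror the earlier snippet, along these lines (the 60-second budget and the 0.05/0.50 autofit arguments are placeholders):

from alns import ALNS
from alns.stop import MaxRuntime

max_runtime = 60  # seconds, say

# Construct the criterion right before iterating, since its clock starts in
# __init__; init_sol is your initial solution.
accept = TimeBasedSimulatedAnnealing.autofit(init_sol.objective(), 0.05, 0.50, max_runtime)
stop = MaxRuntime(max_runtime)

alns = ALNS()
res = alns.iterate(init_sol, ..., accept=accept, stop=stop)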
I think it would be nice to provide time-based SA and RRT criteria at some point in the future. We need to think about how that affects the current SA and RRT implementations. But until then you can always use customized acceptance criteria :-)
Thanks for the replies @N-Wouda and @leonlan!
We currently have an implementation that is similar to the one provided by @N-Wouda, though implemented in a single AcceptanceCriterion. The idea of a time-based SA is interesting and I'll try that out for sure!
Given that there is currently no desire to implement this in the library, I think it's safe to close this issue?
Sure! If you have any further questions/suggestions, feel free to open another issue @ricohageman!
For a project the maximum duration for the optimization is known in advance and fixed. However, the duration required per iteration depends on certain destroy operator settings and the problem size. Therefore, the number of iterations is unknown at the start and we cannot use the autofit method of the SimulatedAnnealing class.

Currently an estimate for the number of iterations is used in the autofit method, and after a couple of iterations the step size is updated based on the elapsed time. This enables us to adapt the acceptance criterion to accommodate the full available search duration.

Is there another approach that could work, and/or would something like this be useful to add to this package?
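A rough sketch of what such a self-contained, re-fitting criterion could look like (the class and parameter names here are hypothetical, as are the default numbers): it starts from a rough guess of the iteration count and, after a short estimation phase, re-derives the geometric step size from the measured time per iteration.

import time

import numpy as np

from alns.accept.AcceptanceCriterion import AcceptanceCriterion


class RefittingSimulatedAnnealing(AcceptanceCriterion):
    def __init__(self, start_temp, end_temp, max_runtime, est_iters=25, guess_iters=1_000):
        self.temperature = start_temp
        self.end_temp = end_temp
        self.max_runtime = max_runtime
        self.est_iters = est_iters

        # Geometric step size based on the initial (rough) iteration guess.
        self.step = (end_temp / start_temp) ** (1 / guess_iters)

        self.iters = 0
        self.start_time = time.perf_counter()

    def __call__(self, rnd, best, current, cand):
        self.iters += 1

        if self.iters == self.est_iters:
            # Re-derive the step size from the measured time per iteration,
            # so the end temperature is reached when the budget runs out.
            elapsed = time.perf_counter() - self.start_time
            remaining = max(self.max_runtime - elapsed, 1e-6)
            remaining_iters = max(int(remaining * self.iters / elapsed), 1)
            self.step = (self.end_temp / self.temperature) ** (1 / remaining_iters)

        probability = np.exp((current.objective() - cand.objective()) / self.temperature)
        self.temperature = max(self.temperature * self.step, self.end_temp)
        return probability >= rnd.random()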