Closed · leonlan closed this issue 2 years ago
I see something similar already on quali (same benchmark, but the "Static" tab with runs from 6 October): keeping the best solution does not seem to improve solution quality. It's probably too soon to tell whether it makes things (much) worse, but there's no clear benefit either.
I ran some more experiments over the weekend to analyze performance. Instead of keeping the best, I kept 0, 1, or 15 binary-tournament-picked individuals and then added `minPopSize` random individuals on restart. These are the results:
| nbIter / nbKeepOnRestart | 0 | 1 | 15 |
|---|---|---|---|
| 5000 | 164337 | 164344 | 164350 |
| 7500 | 164344 | 164349 | 164358 |
| 10000 | 164345 | 164372 | 164379 |
| 12500 | 164347 | 164367 | 164368 |
Conclusion: keeping nothing works best.
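For concreteness, the restart scheme described above could be sketched roughly as follows. This is an illustrative placeholder, not the actual implementation: `binary_tournament`, `restart`, and the cost/individual representations are all assumptions.

```python
import random


def binary_tournament(population, cost):
    """Sample two individuals at random and keep the cheaper one."""
    a, b = random.sample(population, 2)
    return a if cost(a) < cost(b) else b


def restart(population, cost, nb_keep_on_restart, min_pop_size, make_random):
    """Keep nb_keep_on_restart tournament-selected individuals from the
    old population, then fill up with min_pop_size random individuals."""
    kept = [binary_tournament(population, cost)
            for _ in range(nb_keep_on_restart)]
    return kept + [make_random() for _ in range(min_pop_size)]
```

With `nb_keep_on_restart=0` this reduces to a full restart from random individuals, which is what the table suggests works best.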
I also tried mutating the best individual with slack induction string removal operators, but that didn't help either.
I'm really surprised that keeping parts of the old population yields no improvement at all 🤷‍♂️. Maybe I just didn't hit the sweet spot.
So keeping more of the old population is consistently worse: the costs never decrease going from 0 to 1 to 15. Larger values for `nbIter` seem to be worse as well, but the evidence isn't as clear cut. We should definitely investigate restarting fairly often when tuning; `nbIter` should probably be in the range [1K, 10K] or so.
[1K, 10K] seems like a good range to me.
We may also want to change how `nbIterNoImprovement` is updated. Right now, it looks at the current best feasible solution: if that is not improved, the counter increments by one. The original implementation instead looks at whether the global best solution is improved. I don't know if the latter is better, but if we do consider it, we need a much larger range, something like [5K, 20K]. Otherwise all restarted populations keep restarting without even getting close to a good solution.
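To make the difference concrete, here is a minimal sketch of the two counter update rules, assuming a minimizing objective; the function names are mine, not the code's:

```python
def update_counter_current(counter, cost, best_since_restart):
    """Current rule: reset the no-improvement counter whenever the best
    feasible solution found since the last restart improves."""
    if cost < best_since_restart:
        return 0, cost
    return counter + 1, best_since_restart


def update_counter_global(counter, cost, global_best):
    """Original rule: reset only when the best solution across the whole
    run improves. Right after a restart this almost never triggers, so
    the counter needs a much larger limit before restarting again."""
    if cost < global_best:
        return 0, cost
    return counter + 1, global_best
```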
> The original implementation looks at whether the global best solution is improved. I don't know if the latter is better, but if we do consider that one, we need to have a much larger range, something like [5k, 20k].
I feel like that original implementation is fairly flawed, and probably not better than what we have now. We observe that after a few thousand iterations the search basically gets stuck, and never really gets close to the best solution from earlier restarts in the current run; it needs a restart and another go before it can possibly improve the objective again.
It's probably just better to quit after a little while without improvement, so as to explore more.
I think you are right. Thinking back on why I changed it: there are a lot of instances where the algorithm only gets stuck after some 30K iterations. A restart threshold based on global improvement would then need to be at least 30K, but that's way too much for other instances.
The current implementation is much more robust to this.
I'll close this issue because there's not enough time to try something else, and `nbKeepOnRestart=0` seems to work fine. Perhaps parameter tuning will discover a better value, but who knows.
Benchmark results show that keeping the best solution is not good for final phase performance. By setting `nbKeepBest=0`, we gain 100 more points on the final phase compared to the quali phase. There is no gain at all if we set `nbKeepBest=1`.

I have an idea to mutate `best` before putting it into the new population. This will be done after finishing #96. I don't know if it will work, but we'll see!
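A rough sketch of what mutating `best` before reinsertion might look like, assuming a simple list-of-routes representation. The operator below just relocates a random contiguous string of clients between routes; it is only loosely inspired by string removals and is not the actual SISR operator, and `mutate_best` and `max_string_len` are hypothetical names.

```python
import random


def mutate_best(best_routes, max_string_len=5):
    """Remove a random contiguous string of clients from one route and
    append it to another route. Returns a new solution; the input is
    left unchanged."""
    routes = [route[:] for route in best_routes]
    src = random.randrange(len(routes))
    if not routes[src]:  # nothing to remove from an empty route
        return routes
    length = random.randint(1, min(max_string_len, len(routes[src])))
    start = random.randrange(len(routes[src]) - length + 1)
    removed = routes[src][start:start + length]
    del routes[src][start:start + length]
    candidates = [i for i in range(len(routes)) if i != src]
    dst = random.choice(candidates) if candidates else src
    routes[dst].extend(removed)
    return routes
```

In a real implementation the removed clients would be reinserted with a cheapest-insertion rule rather than simply appended, but the sketch shows the overall shape of the idea.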