I know you’ve already done loads of videos about optimization problems and algorithms to solve them, but I still suggest doing another one: the simulated annealing algorithm. It’s a great tool because it’s specifically designed not to get stuck in local minima, and it’s probably the easiest algorithm to implement that does that.
The algorithm is a randomized variant of hill climbing (similar in spirit to gradient descent). Layout:
1) Generate a random solution
2) Generate a neighbor of the current solution (how exactly depends on the problem)
3) If the neighbor is better than the current solution, make it the new current solution.
4) If it’s worse, make it the new current solution anyway with some probability. That probability is a function of the difference in fitness between the neighbor and the current solution, and of a “temperature” that decreases with each iteration. (The lower the temperature, the lower the probability.)
This is what lets you get “unstuck” when you’re in a local optimum.
5) Go to step 2
6) When the temperature is (nearly) 0, the algorithm only ever accepts improvements, so it behaves like plain hill climbing and eventually gets stuck. When that happens, stop, hopefully in the global optimum.
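The steps above can be sketched in a few lines of plain JavaScript. This is a minimal sketch, not anyone's actual code: the function names and the toy 1-D fitness function are my own illustrative choices, and the standard exp(-delta/t) acceptance rule is assumed for step 4.

```javascript
// Minimal simulated annealing sketch (minimizing; lower fitness = better).
// All names here are illustrative, not taken from the linked p5.js sketch.
function simulatedAnnealing(fitness, randomSolution, neighbor, tStart, tEnd, cooling) {
  let current = randomSolution();        // step 1: random starting solution
  let currentFit = fitness(current);
  let best = current, bestFit = currentFit;
  for (let t = tStart; t > tEnd; t *= cooling) {
    const cand = neighbor(current);      // step 2: random neighbor
    const candFit = fitness(cand);
    const delta = candFit - currentFit;  // < 0 means the neighbor is better
    // Steps 3-4: always accept improvements; accept worse moves with
    // probability exp(-delta / t), which shrinks as the temperature cools.
    if (delta < 0 || Math.random() < Math.exp(-delta / t)) {
      current = cand;
      currentFit = candFit;
    }
    if (currentFit < bestFit) { best = current; bestFit = currentFit; }
  }
  return best;                           // steps 5-6: loop until cooled, then stop
}

// Toy usage: a 1-D function with a local minimum near x ≈ 1.2
// and the global minimum near x ≈ -1.3.
const f = x => x * x * x * x - 3 * x * x + x;
const result = simulatedAnnealing(
  f,
  () => Math.random() * 4 - 2,            // random start in [-2, 2]
  x => x + (Math.random() - 0.5) * 0.5,   // small random step
  1.0, 1e-4, 0.995
);
```

The geometric cooling schedule (`t *= cooling`) is just one common choice; any schedule that decreases toward zero works, and tuning it is half the fun.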
Here’s a Wikipedia page: https://en.m.wikipedia.org/wiki/Simulated_annealing
I’ve already tried this algorithm on the TSP, but since you’ve already made a ton of videos on that, I suggest the Knapsack problem instead. Here’s my TSP code, by the way: https://editor.p5js.org/BIGfoot/sketches/6hot9Ssgm
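For the knapsack version, the only problem-specific parts are the neighbor function and the fitness function. Here's one hypothetical way to set them up (the item list, penalty constant, and names are all my own assumptions): a solution is an array of 0/1 flags, a neighbor flips one random item in or out, and overweight solutions get penalized rather than rejected outright.

```javascript
// Hypothetical problem setup for 0/1 knapsack; items and capacity are made up.
const items = [
  { weight: 2, value: 3 }, { weight: 3, value: 4 },
  { weight: 4, value: 5 }, { weight: 5, value: 8 },
];
const capacity = 8;

// Neighbor: flip one random item in or out of the knapsack.
function neighbor(solution) {
  const copy = solution.slice();
  const i = Math.floor(Math.random() * copy.length);
  copy[i] = 1 - copy[i];
  return copy;
}

// Fitness (higher = better): total value, with a heavy penalty per
// unit of weight over capacity so overweight solutions score badly.
function fitness(solution) {
  let w = 0, v = 0;
  solution.forEach((on, i) => { if (on) { w += items[i].weight; v += items[i].value; } });
  return w <= capacity ? v : v - 10 * (w - capacity);
}
```

Since this fitness is maximized rather than minimized, the acceptance test flips sign accordingly. The penalty approach keeps the search space connected: the algorithm can pass through an overweight solution on its way to a better valid one.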
(My code works fine, but it’s full of kludges, so I don’t recommend using it as a starting point.)