dietmarwo opened 2 years ago
Great work. It is easy to integrate new algorithms and the graphical output is awesome.
I have only 4 minor issues:
1) https://numba.pydata.org/ is easy to use and can speed up the GA by up to a factor of 100 without significant code changes (a minimal sketch follows this list).
2) Only weak baseline algorithms are provided. They are very nice for pedagogical purposes, but a state-of-the-art algorithm that is challenging to beat is missing.
3) multiprocessing.Pool creates daemonic worker processes, which are not allowed to spawn children of their own. This prevents experiments with algorithms that parallelize internally (a well-known workaround is sketched after this list).
4) A multi-objective variant of the problem, together with the corresponding optimizer(s), is missing (a tiny dominance-check sketch follows).
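
For item 1, here is a minimal sketch of how a GA fitness kernel could be JIT-compiled with Numba. `route_cost` and `dist` are hypothetical names for illustration, not the benchmark's actual code; the only change needed to get the compiled version is the `@njit` decorator.

```python
import numpy as np
from numba import njit

@njit
def route_cost(order, dist):
    # order: permutation of task indices, dist: precomputed distance matrix.
    total = 0.0
    for i in range(order.shape[0] - 1):
        total += dist[order[i], order[i + 1]]
    return total

# Example call with a random task permutation and distance matrix.
dist = np.random.rand(20, 20)
order = np.random.permutation(20)
print(route_cost(order, dist))
```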
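For item 3, one well-known workaround (the often-cited non-daemonic pool pattern, here assuming Python 3.8+) is a drop-in `Pool` replacement whose workers report `daemon=False`, so they may spawn their own processes:

```python
import multiprocessing
import multiprocessing.pool

class NoDaemonProcess(multiprocessing.Process):
    # Workers report daemon=False so they may spawn children of their own.
    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        pass  # ignore the pool's attempt to set daemon=True

class NoDaemonContext(type(multiprocessing.get_context())):
    Process = NoDaemonProcess

class NestablePool(multiprocessing.pool.Pool):
    # Drop-in replacement for multiprocessing.Pool with non-daemonic workers.
    def __init__(self, *args, **kwargs):
        kwargs["context"] = NoDaemonContext()
        super().__init__(*args, **kwargs)
```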
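For item 4, the core of any multi-objective extension is a Pareto-dominance check. A minimal sketch (assuming minimization; the objective names are illustrative, not the benchmark's):

```python
import numpy as np

def dominates(a, b):
    # a Pareto-dominates b: no worse in every objective, better in at least one.
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objectives):
    # Return the non-dominated rows of an (n_solutions, n_objectives) array.
    objectives = np.asarray(objectives, dtype=float)
    keep = np.array([not any(dominates(q, p) for q in objectives)
                     for p in objectives])
    return objectives[keep]

# Example: trade-off between, say, total distance and makespan.
print(pareto_front([[3.0, 5.0], [2.0, 7.0], [4.0, 6.0]]))
```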
I created a fork, https://github.com/dietmarwo/Multi-UAV-Task-Assignment-Benchmark, that fixes all these issues. I can create pull requests if you are interested in any of these fixes. See also https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/UAV.adoc.
We should make sure that a future comparison with reinforcement learning is fair: machine learning exploits many GPU cores, so we should also parallelize the optimization side.
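For example, the fitness evaluation of a whole population can be spread over all available CPU cores. A minimal sketch (the `fitness` function is a placeholder, not the benchmark's objective):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def fitness(candidate):
    # Placeholder objective; the real benchmark fitness would go here.
    return sum(x * x for x in candidate)

def evaluate_population(population):
    # Evaluate all candidates in parallel so the optimizer gets a hardware
    # budget comparable to a GPU-backed RL agent. Note: the fitness function
    # must be picklable, i.e. defined at module level.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(fitness, population))

if __name__ == "__main__":
    print(evaluate_population([[1, 2], [3, 4], [5, 6]]))
```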
Nice work! I will add your repo to the README so that others can see your update.
Added