alyst closed this issue 8 years ago
Do you mean multi-objective, i.e. multiple goals that need to be traded off, typically with a Pareto front / hypervolume approach? If so the answer is definitely yes, we use that in our research and have planned to include it properly in BBO at some point. I consider the state of the art to be the Borg MOEA of Hadka et al and have planned to prioritize/focus the design around that, see for example:
Hadka, David, and Patrick Reed. "Borg: An auto-adaptive many-objective evolutionary computing framework." Evolutionary Computation 21.2 (2013): 231-259. https://scholar.google.se/scholar?hl=sv&q=Borg%3A+An+Auto-Adaptive+Many-Objective+Evolutionary+Computing+Framework&btnG=
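To make the Pareto front idea concrete, here is a minimal Python sketch (illustrative only, not BlackBoxOptim code) of Pareto dominance and non-dominated filtering for a minimization problem; the function names are mine, not from any library:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

This naive O(n²) filter is fine for illustration; real MOEAs maintain the front incrementally in an archive.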
Currently I do not have a lot of time for this though, so if you are in a hurry maybe you can consider starting on the implementation? Or we can set up a way to collaborate on it?
And no, I think we should not constrain it to 2 objectives. Many of the recent many-objective algorithms of Hadka et al, Deb et al, Yao et al and others scale quite gracefully from 2 objectives up to 15-30 of them. The older NSGA-II is known to work well only for a small number of objectives, so I would rather not start there. IMHO, the Borg approach is one of the most elegant and effective, and it can dynamically trade off between different operators as the search needs them.
Some of Deb et al's recent work even shows that there are many-objective algorithms that scale down to even work well in single-objective settings.
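One ingredient that helps Borg scale to many objectives is its epsilon-box dominance archive. Below is a simplified Python sketch of epsilon-box dominance (minimization); it is my own illustration, not Borg's or BlackBoxOptim's code, and it omits Borg's tie-breaking between solutions that land in the same box:

```python
import math

def eps_box(point, eps):
    """Map an objective vector to its epsilon-box index: each objective
    is discretized into cells of width eps."""
    return tuple(math.floor(v / eps) for v in point)

def eps_dominates(a, b, eps):
    """Epsilon-box dominance: a dominates b if a's box is no worse in
    every objective and the boxes differ in at least one coordinate."""
    ba, bb = eps_box(a, eps), eps_box(b, eps)
    return all(x <= y for x, y in zip(ba, bb)) and ba != bb
```

Coarsening the front into eps-sized boxes bounds the archive size and gives the user direct control over the resolution of the approximated Pareto front.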
Yes, I meant multi-objective optimization. Thank you for the references! At the moment I don't have the resources to start implementing it myself either, I just wanted to better understand how big and relevant the effort would be. Of course, if somebody starts implementing MOO, I will be very interested in contributing, testing, etc.
Ok, I plan to start this work some time in March-April 2016. If you have time to wait for that, it would be great if you could then help out, test, and refine it. Thanks.
That sounds great!
Currently I'm considering whether multicriterion optimization (MO) can improve convergence for my problems (Bayesian posterior-like functions, i.e. the prior probability is one criterion and the likelihood is another).
Would it make sense to extend BlackBoxOptim to multicriterion problems? If yes, what would be the best strategy for implementing MO? What are the most efficient evolutionary methods for MO? Should MO be constrained to the 2-criteria case (as then the Pareto frontier is easier to manipulate)?
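On why the 2-criteria case is easier to manipulate: in 2D the Pareto front can be kept sorted by the first objective, which makes quality indicators like the hypervolume computable by a simple sweep. A minimal Python sketch (my own illustration, assuming a non-dominated minimization front and a reference point that is worse than every front point in both objectives):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. reference
    point ref: sort by the first objective and sum the rectangular slabs
    each point adds below the previous one."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

print(hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], (4.0, 4.0)))  # 6.0
```

In higher dimensions no such ordering exists and exact hypervolume computation grows expensive with the number of objectives, which is one reason many-objective algorithms such as Borg rely on dominance-based archives instead.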