facebookresearch / nevergrad

A Python toolbox for performing gradient-free optimization
https://facebookresearch.github.io/nevergrad/
MIT License

Add multiobjective benchmarks #1059

Open jrapin opened 3 years ago

jrapin commented 3 years ago

The current multiobjective benchmarks are based on ArtificialFunction, which was originally designed as mono-objective. We should use more "native" multiobjective benchmarks, such as those presented on the Wikipedia page: https://en.wikipedia.org/wiki/Test_functions_for_optimization#Test_functions_for_multi-objective_optimization
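For concreteness, here is what one of those functions could look like as a plain callable (a minimal NumPy sketch of the Fonseca-Fleming function from that page; the name and signature are illustrative, not existing nevergrad code):

```python
import numpy as np

def fonseca_fleming(x: np.ndarray) -> tuple:
    """Fonseca-Fleming function: two objectives, with each
    coordinate usually bounded in [-4, 4] (see the Wikipedia page)."""
    shift = 1.0 / np.sqrt(x.size)
    f1 = 1.0 - np.exp(-np.sum((x - shift) ** 2))
    f2 = 1.0 - np.exp(-np.sum((x + shift) ** 2))
    return f1, f2
```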

afozk95 commented 3 years ago

I can work on this issue if that is okay. It will be my first issue in nevergrad.

I checked the code, and my understanding is that the functions on the Wikipedia page should be implemented in corefuncs.py, similar to this. Is that right? If so, what is the best way to enforce the constraints and search domains of those functions?

jrapin commented 3 years ago

Hi and welcome ;)

corefuncs currently contains only very generic, mono-objective, and unconstrained (as you noticed) functions for use with a particular experiment class, ArtificialFunction. That class is messy and contains too many things that would not be compatible with the constrained multiobjective case here, so I would rather keep this separate.

Let's create a subpackage in nevergrad/functions (multiobj?) and keep everything separate there (including tests). Worst case, we'll merge it back later once things settle, but I am not sure that will be possible.

I am not sure how best to structure this. As an early note, I tend to avoid repeated/copy-pasted code because it is hell to maintain, but in this particular case I am not sure how to best avoid repetition. One option is to define each function separately and then define the corresponding parametrization in an ExperimentFunction, but that splits each definition into two parts. Another option is to create one callable class per test case, with the parametrization included, and then call the right one from an ExperimentFunction initialized with the name of the test case (see the sketch below).
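A rough sketch of that second option (all names here are hypothetical, and ExperimentFunction is assumed to take a function and a parametrization, as in the current base class):

```python
import numpy as np
import nevergrad as ng
from nevergrad.functions import ExperimentFunction

class FonsecaFleming:
    """Callable test case bundling the objective with its own parametrization."""

    def __init__(self, dimension: int = 3) -> None:
        # bounded search domain, as specified on the Wikipedia page
        self.parametrization = ng.p.Array(shape=(dimension,)).set_bounds(-4, 4)

    def __call__(self, x: np.ndarray):
        shift = 1.0 / np.sqrt(x.size)
        return (1.0 - np.exp(-np.sum((x - shift) ** 2)),
                1.0 - np.exp(-np.sum((x + shift) ** 2)))

_TEST_CASES = {"fonseca_fleming": FonsecaFleming}  # hypothetical registry

class MultiobjectiveExperiment(ExperimentFunction):
    """Experiment selected by the name of the test case."""

    def __init__(self, name: str) -> None:
        case = _TEST_CASES[name]()
        super().__init__(case, case.parametrization)
```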

afozk95 commented 3 years ago

Hi @jrapin :)

I am not sure if what I did is exactly what you meant, but I tried to implement the second option you mentioned. For now, I have implemented 3 test cases from the Wikipedia page linked above as separate callable classes; please see my implementation in #1135. I also optimized those 3 cases using DE and successfully replicated the Pareto front plots. A rough outline of the run is below.
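The run looked roughly like this (a sketch assuming a nevergrad version that accepts a sequence of losses directly and exposes optimizer.pareto_front(); schaffer_n1 is a stand-in here, not the code from the PR):

```python
import nevergrad as ng

def schaffer_n1(x):
    """Schaffer function N. 1: a classic bi-objective test case."""
    return [x[0] ** 2, (x[0] - 2.0) ** 2]

param = ng.p.Array(shape=(1,)).set_bounds(-10, 10)
optimizer = ng.optimizers.DE(parametrization=param, budget=200)
optimizer.minimize(schaffer_n1)  # a sequence of losses => multiobjective
for point in optimizer.pareto_front():
    print(point.value, point.losses)  # candidates on the Pareto front
```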

kmira1498 commented 2 years ago

Hi @jrapin, my team (@hannah-hole, @SampurnaGera, @batokio, @aakash5chawla, @kmira1498) and I would like to work on this issue as our first issue.