williamjameshandley opened 5 years ago
To get the ball rolling, here are a few initial suggestions.
Goldstein-Price function: f(x₁, x₂) = [1 + (x₁ + x₂ + 1)²(19 − 14x₁ + 3x₁² − 14x₂ + 6x₁x₂ + 3x₂²)] × [30 + (2x₁ − 3x₂)²(18 − 32x₁ + 12x₁² + 48x₂ − 36x₁x₂ + 27x₂²)]. The Goldstein-Price function is a two-dimensional function designed to test the convergence rate of global optimisers, since it is very flat near the global minimum. It is not a test function with many local minima/maxima, and it is only two-dimensional. (The typical domain of use is xᵢ ∈ [−2, 2], i = 1, 2, and it has a global minimum of f(0, −1) = 3.)
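A minimal sketch of this function in pure Python (the name `goldstein_price` and its signature are my own choice here, not an existing pybambi API):

```python
def goldstein_price(x1, x2):
    """Goldstein-Price test function; global minimum f(0, -1) = 3."""
    a = 1 + (x1 + x2 + 1) ** 2 * (
        19 - 14 * x1 + 3 * x1 ** 2 - 14 * x2 + 6 * x1 * x2 + 3 * x2 ** 2
    )
    b = 30 + (2 * x1 - 3 * x2) ** 2 * (
        18 - 32 * x1 + 12 * x1 ** 2 + 48 * x2 - 36 * x1 * x2 + 27 * x2 ** 2
    )
    return a * b

print(goldstein_price(0.0, -1.0))  # 3.0 at the global minimum
```

Note that both bracketed factors simplify exactly at (0, −1): the first squared term vanishes, so the product is 1 × 3 = 3.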
SpikeGrid function: f(x) = 1 − |∏ᵢ sin(πxᵢ/(2aᵢ))|^k, where the product runs over i = 1, …, n. The SpikeGrid function consists of a grid of "spikes", with k controlling the "sharpness" of the spikes. It is similar to the egg-holder function but nastier, since the spikes can be made arbitrarily sharp. Its disadvantage is that it is periodic, so it might be 'easy' to learn.
Schwefel function: f(x) = 418.9829n − Σᵢ xᵢ sin(√|xᵢ|), where the sum runs over i = 1, …, n. The Schwefel function is highly multi-modal, similar to the egg-holder function but better for testing, since the depth and height of the minima and maxima vary with location. It is still periodic, though. (The typical domain of use is xᵢ ∈ [−500, 500], i = 1, …, n, and it has a global minimum of f(420.9687, …, 420.9687) = 0.)
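A NumPy sketch of the Schwefel function, taking a vector input so it works in any dimension (again, the name `schwefel` is my choice rather than established pybambi code):

```python
import numpy as np

def schwefel(x):
    """Schwefel function on [-500, 500]^n; global minimum ~0 at x_i = 420.9687."""
    x = np.asarray(x, dtype=float)
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))

print(schwefel(np.full(2, 420.9687)))  # close to 0 at the global minimum
print(schwefel(np.zeros(2)))           # 418.9829 * n at the origin
```

The 418.9829n offset is what shifts the minimum value to approximately zero; at the commonly quoted optimum 420.9687 the residual is of order 1e-4 per dimension.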
These are all good ideas @cbalazs . Other examples might include the ones in the MultiNest paper (e.g. rastrigin, rosenbrock, gaussian shell and himmelblau).
I think the best way to start on this would be to create a module pybambi/example_likelihoods.py and define these functions in there. There should also be a method for generating samples from these distributions using MultiNest and PolyChord (interfaces are provided in the pybambi.MultiNest and pybambi.PolyChord modules), using the outputs of either the _equal_weights.txt or .txt files. You'll need a corresponding test file in tests/test_example_likelihoods.py to confirm that the functions produce the correct values for a small set of example inputs.
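A test along these lines might look as follows (pytest style). The function is inlined so the sketch runs standalone; in the repo it would instead be imported from pybambi.example_likelihoods, and the name is an assumption:

```python
# Sketch of tests/test_example_likelihoods.py

def goldstein_price(x1, x2):
    # Inlined here for a self-contained sketch; in the repo this would be
    # `from pybambi.example_likelihoods import goldstein_price`.
    a = 1 + (x1 + x2 + 1) ** 2 * (
        19 - 14 * x1 + 3 * x1 ** 2 - 14 * x2 + 6 * x1 * x2 + 3 * x2 ** 2
    )
    b = 30 + (2 * x1 - 3 * x2) ** 2 * (
        18 - 32 * x1 + 12 * x1 ** 2 + 48 * x2 - 36 * x1 * x2 + 27 * x2 ** 2
    )
    return a * b

def test_goldstein_price_known_values():
    # Global minimum plus one off-minimum reference point.
    assert goldstein_price(0.0, -1.0) == 3.0
    assert goldstein_price(0.0, 0.0) == 600.0

test_goldstein_price_known_values()  # pytest would discover this automatically
```

The same pattern (a handful of hand-checked input/output pairs per function) would apply to the other example likelihoods.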
Once you've done that on a local fork, you should create a pull request where we can discuss any further changes to the code. Reference this issue with the tag #15
This can be done in parallel with #4, in order to try to find a good neural network architecture @melli1992