NRottmann / Toolbox-GP-GMRF


Understanding the Methods #1

Open bonanza123 opened 6 years ago

bonanza123 commented 6 years ago

Thanks a lot for this toolbox!

I have some issues understanding the methods implemented and understanding why they are performing so differently.

My understanding is as follows:

method 0 does a full Bayesian optimization using the log-likelihood function as given in (2.11) of the documentation, but for K = C. This is combined with a heuristic that varies the initial solution (regarding the generated points and the kernel parameters). In my view this is the optimal Bayesian solution (assuming that we can find the global optimum). Is that correct?
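To make my reading of method 0 concrete, here is a minimal multi-start sketch in Python (the toolbox itself is Matlab; the RBF kernel, the log-space parametrization, and all names below are my own stand-ins, not the toolbox's code):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, X, y):
    # Hypothetical squared-exponential kernel with a length scale and a
    # noise level, optimized in log space; stand-ins for the toolbox's
    # actual kernel parameters.
    ell, sigma_n = np.exp(theta)
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-0.5 * d2 / ell**2) + (sigma_n**2 + 1e-6) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # (2.11)-style negative log likelihood: data fit + complexity + constant
    return 0.5 * y @ a + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

def fit_multistart(X, y, n_starts=10, seed=0):
    # The heuristic described above: restart the optimizer from varied
    # initial solutions and keep the best local optimum found.
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        theta0 = rng.uniform(-2.0, 1.0, size=2)
        res = minimize(neg_log_likelihood, theta0, args=(X, y),
                       method="L-BFGS-B", bounds=[(-5.0, 5.0)] * 2)
        if best is None or res.fun < best.fun:
            best = res
    return best
```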

method 1 is termed "Bayesian optimization", but it's not really clear to me how or why it works. Where does the objective f = - mu - alpha * Sigma (given in BayOpt_objFun.m) come from? And why are new x_bay / y_bay values added in each iteration?
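My best guess is that f = - mu - alpha * Sigma is a confidence-bound acquisition function: minimizing it selects the point with the best mean-plus-uncertainty trade-off, and that point is then evaluated and appended, which would explain the growing x_bay / y_bay arrays. A minimal Python sketch of that reading (all parameters and names are my own stand-ins, not the toolbox's code):

```python
import numpy as np

def gp_posterior(Xs, X, y, ell=0.2, sigma_n=1e-2):
    # GP posterior mean/std with an RBF kernel; ell and sigma_n are
    # hypothetical values standing in for the toolbox's kernel parameters.
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(X, X) + sigma_n**2 * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.clip(var, 0.0, None))

def bayes_opt(objective, n_iter=10, alpha=2.0, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 201)
    X = rng.uniform(0.0, 1.0, 3)          # small initial design
    y = objective(X)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(grid, X, y)
        # Minimizing f = -mu - alpha*sigma picks the point with the best
        # upper confidence bound: exploit high mean, explore high variance.
        x_new = grid[np.argmin(-mu - alpha * sigma)]
        X = np.append(X, x_new)           # new x_bay / y_bay each iteration
        y = np.append(y, objective(np.array([x_new])))
    return X[np.argmax(y)], y.max()
```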

Just testing with the example provided in the toolbox, method 1 seems to perform much better than method 0 (although I would expect method 0 to be optimal). Why is that?

Thanks a lot for your help.

NRottmann commented 6 years ago

Hi, the different methods just select which algorithm is used to find the minimum of equation (2.11); all methods therefore try to find the best solution for the log likelihood. Here fminunc from Matlab works best, but it may get stuck in a local minimum. Therefore other algorithms, such as genetic algorithms, have been implemented as well.
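The difference can be illustrated on a toy multimodal objective: a local, gradient-based search (like fminunc) converges to whichever minimum is nearest its start point, while a population-based global search is far less sensitive to initialization. A Python sketch, using SciPy's differential evolution as a stand-in for the toolbox's genetic algorithm:

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

# A toy multimodal function standing in for the negative log likelihood of
# Eq. (2.11); the actual surface depends on the kernel and the data.
def nll(theta):
    return np.sin(3.0 * theta[0]) + 0.1 * theta[0]**2

# Local, gradient-based search (fminunc-like): fast, but it settles into
# the local minimum nearest the start point x0.
local = minimize(nll, x0=[2.0])

# Global, population-based search (genetic-algorithm-like): slower, but
# largely insensitive to where the search starts.
glob = differential_evolution(nll, bounds=[(-5.0, 5.0)], seed=0)
```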

I hope I could help you. Best, Nils