accosmin opened this issue 8 years ago (status: Open)
This paper looks very promising: https://arxiv.org/pdf/1503.03712.pdf. This one is also interesting (they parametrize the activation function so it can vary from almost linear to strongly non-linear): https://arxiv.org/pdf/1702.00758.pdf
Implement some basic continuation methods for global optimization, using smoothing functions of the form `h(x) = (1 - lambda) * f(x) + lambda * g(x)` with `lambda` in `[0, 1]` (decreased from 1 towards 0 during optimization), where `g(x)` can be:
Need to add unit tests and update the benchmark programs.
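A minimal sketch of the continuation loop described above, assuming gradient descent as the inner solver and a hypothetical convex choice `g(x) = x^2` for the smoothing target (the function names, the toy objective, and the lambda schedule are all illustrative, not part of this issue):

```python
import math

def continuation_minimize(f_grad, g_grad, x0, lambdas=(0.9, 0.6, 0.3, 0.0),
                          lr=0.02, steps=500):
    """Follow minimizers of h = (1 - lam) * f + lam * g as lam shrinks to 0,
    warm-starting each stage from the previous solution (illustrative sketch)."""
    x = x0
    for lam in lambdas:
        for _ in range(steps):
            # gradient of h(x) = (1 - lam) * f(x) + lam * g(x)
            x -= lr * ((1.0 - lam) * f_grad(x) + lam * g_grad(x))
    return x

# Toy non-convex objective with many local minima: f(x) = x^2 + 0.3 * sin(10 x),
# smoothed towards the convex g(x) = x^2 (hypothetical choice of g).
f = lambda x: x * x + 0.3 * math.sin(10.0 * x)
f_grad = lambda x: 2.0 * x + 3.0 * math.cos(10.0 * x)
g_grad = lambda x: 2.0 * x

x_opt = continuation_minimize(f_grad, g_grad, x0=2.0)
```

Starting far from the origin, plain gradient descent on `f` gets trapped in one of the shallow local minima, while the continuation path tracks the minimizer of the smoothed problem down to the good basin near the global minimum. Benchmark programs could compare exactly this: final `f` value with and without continuation.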