fslaborg / FSharp.Stats

statistical testing, linear algebra, machine learning, fitting and signal processing in F#
https://fslab.org/FSharp.Stats/

[Feature Request] Documentation of Nelder-Mead method #260

Open bvenn opened 1 year ago

bvenn commented 1 year ago

Problem

When the Nelder-Mead method is applied to a simple quadratic polynomial, the minimization fails to identify the minimum. It gets quite close, but modifying the StopCriterion or the NmConfig isn't trivial without further documentation.

Solution

A description of the NmConfig and stop-criterion fields in the documentation.
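
For reference, the only tuning hook that is currently discoverable is the stop-criteria record (used, commented out, in the reproduction below). A minimal sketch, assuming MinFunctionEpsilon is the field that controls the function-value tolerance:

// tighten the function-value tolerance before passing the criteria to the solver;
// the field name is taken from the reproduction below, and its exact meaning is
// what this issue asks to have documented
let tighterStop =
    { OptimizationStop.defaultStopCriteria with MinFunctionEpsilon = 1e-24 }

// the solver would then be called via NelderMead.minimizeWithStopCriteria
// instead of NelderMead.minimize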

Steps to reproduce

#r "nuget: Plotly.NET"
#r "nuget: FSharp.Stats, 0.4.12-preview.2"

open FSharp.Stats
open FSharp.Stats.Optimization
open System

open Plotly.NET

let myFunction (xs: vector) = 
    let x = xs.[0]
    x**2. - 0.32*x - 0.13

// initial guess for the optimization
let x0 = vector [| -0.3 |]

// default solver options
let nmc = NelderMead.NmConfig.defaultInit()   

// optimization procedure
let optim = 
    //let stopCrit = 
    //    { OptimizationStop.defaultStopCriteria with MinFunctionEpsilon = 1e-24 }
    //NelderMead.minimizeWithStopCriteria nmc x0 myFunction stopCrit
    NelderMead.minimize nmc x0 myFunction

(*
optim.Vectors contains only 3 valid vectors
*)

let validVectors = 3

// optimization results as x, y, and z coordinate
let xs,ys =
    optim.Vectors.[0 .. validVectors - 1]
    |> Array.map (fun x -> x.[0], myFunction x)
    |> Array.unzip

let optimizationPathchart = 
    [
    [-1.  .. 0.005 .. 1.] |> List.map (fun x -> x,myFunction (vector [x])) |> Chart.Line 
    Chart.Line(x=xs,y=ys,ShowMarkers=true,Name="Optimization path")
    Chart.Point([optim.SolutionVector.[0],optim.Solution],Name="Solution")
    ]
    |> Chart.combine    
    |> Chart.withTemplate ChartTemplates.lightMirrored
    |> Chart.withXAxisStyle ("x",ShowGrid=false) 
    |> Chart.withYAxisStyle ("myFunction(x)",ShowGrid=false) 
    |> Chart.withSize (800.,800.)

Chart.show optimizationPathchart

[Image: line plot of myFunction with the Nelder-Mead optimization path and reported solution, which stops short of the true minimum]

bvenn commented 1 year ago

The problem in the current state is the negative part of the objective function: if a step during the optimization hits a negative function value, the procedure stops immediately. A fix for this is in preparation.
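
Until that fix lands, a minimal workaround sketch (an assumption on my side, not an API the library provides): shift the objective by a constant large enough to keep it positive over the search region. The location of the minimum is unchanged, and the offset can be subtracted from the reported solution.

// workaround sketch: keep the objective positive so the early stop is not triggered
// (the offset of 1.0 is an assumption that covers the search region of this example)
let offset = 1.0
let myFunctionShifted (xs: vector) = myFunction xs + offset

let optimShifted = NelderMead.minimize nmc x0 myFunctionShifted

let xMin = optimShifted.SolutionVector.[0]   // location of the minimum, unchanged
let fMin = optimShifted.Solution - offset    // objective value mapped back to the original function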