JuliaNLSolvers / LsqFit.jl

Simple curve fitting in Julia

Result comparison with CMPFit #105

Open · gcalderone opened this issue 5 years ago

gcalderone commented 5 years ago

I made some comparisons with the CMPFit package, a wrapper around a C library implementing the MINPACK minimization algorithm. The results of the CMPFit library are particularly important, at least in my field, since it is widely used and has become almost a standard. Of course I would prefer to use a pure Julia solution (like LsqFit.jl), hence I decided to run these tests. However, a few issues arose...

The whole code to reproduce the issues is in a single function named compare, reported at the end of this message. The purpose of the function is to compare the LsqFit and CMPFit results. The tasks performed are:

  1. it implements a model to be fitted, a simple Gaussian function plus an offset (4 parameters);
  2. it generates mock data from the model by adding random noise;
  3. it fits the data against the model, using both LsqFit and CMPFit;
  4. it repeats steps 2 and 3 several times (keyword nloop, default: 10,000); the seed of the random number generator is fixed before the first iteration, hence the whole sequence is always the same;
  5. for the first iteration it prints the following information:
    • the number of times the model function has been evaluated;
    • the reduced chi-squared;
    • the best fit parameter values and uncertainties;
  6. for all iterations and all parameters the quantity N = (best_fit_value - true_value) / fit_uncertainty is stored;
  7. after nloop iterations it prints, for each parameter, the average of N (supposed to be ~0) and its standard deviation (supposed to be ~1), regardless of the initial parameter values; a minimal illustration of this check follows the list.
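
For readers unfamiliar with this kind of calibration check: if a fit's uncertainty estimates are correct, the pulls N follow a standard normal distribution, so mean(N) ≈ 0 and std(N) ≈ 1. A minimal, self-contained sketch (not part of the comparison script) of the idea:

using Random, Statistics

# Toy "estimator": it scatters around the true value with a known standard
# deviation sigma and reports that same sigma as its uncertainty.
rng = MersenneTwister(0)
true_value = 2.0
sigma = 0.1
N = [((true_value + sigma * randn(rng)) - true_value) / sigma for _ in 1:10_000]
println(mean(N))   # ≈ 0: the estimator is unbiased
println(std(N))    # ≈ 1: the reported uncertainty matches the actual scatter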

I made two tests, changing the way uncertainties in the data are taken into account.

The results of the first test (Case A) are:

julia> compare()
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:        102     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512
 LsqFit |    Uncert.:  0.000252   0.000293   0.000443   0.000725

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.475844   2.170798  -0.100276  -2.239008   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N): 99.762665  98.581601 100.674642 100.450846   (should be ~1)

Comments:

  • Issue 1: LsqFit evaluates the model far more often than CMPFit (102 vs. 23 calls in the first iteration);
  • Issue 2: the LsqFit parameter uncertainties are a factor ~100 smaller than the CMPFit ones, and accordingly std(N) is ~100 instead of ~1.

In order to address Issue 2 I tried to incorporate the data uncertainties into the model: instead of fitting the model against the data with given weights, I fit the normalized residuals = (data - model) / uncertainties against an array of zeros, each with weight 1 (Case B).
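
For reference, the two formulations minimize the same objective. With $\sigma_i$ the uncertainty of data point $y_i$ and $f(x_i; p)$ the model, weighted least squares with $w_i = 1/\sigma_i^2$ (Case A) is algebraically identical to fitting the normalized residuals against zeros with unit weights (Case B):

$$\sum_i \frac{\bigl(y_i - f(x_i; p)\bigr)^2}{\sigma_i^2} \;=\; \sum_i 1 \cdot \left( \frac{y_i - f(x_i; p)}{\sigma_i} - 0 \right)^2 .$$

Since the objective (and hence the residual vector and Jacobian seen by the solver) is identical, any difference between the two cases should come from how the weights are used after the minimization, e.g. in the covariance/uncertainty estimate, rather than from the fit itself.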

To run the Case B test simply add the resid=true keyword:

julia> compare(resid=true)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:        102     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)

Comments:

  • Issue 3: the best fit values are unchanged, but the uncertainties (and hence the N statistics) now differ from Case A, even though the two cases minimize the same objective;
  • Issue 4: the LsqFit uncertainties are now close to, but still slightly larger than, the CMPFit ones.

In summary: four issues arose in the comparison. Issue 1 is surely solvable (CMPFit does it...) and solving it would provide a significant performance boost. Issues 2, 3 and 4 are possibly different aspects of the same problem, which may even be due to a misunderstanding of the term weights to be passed as the 3rd argument to curve_fit; indeed, in Case A the LsqFit uncertainties are exactly a factor 100 = 1/0.1² smaller than the CMPFit ones, which is precisely the value of the weights (1 ./ e).^2. However, the fact that the results change between Case A and Case B (Issue 3) seems weird to me...

To repeat the above tests simply install the CMPFit package and run the following code:

using CMPFit, LsqFit, Random, Printf, Statistics

function compare(;nloop=10000, resid=false)
    # Variable to count function evaluations
    count = 0
    ndata = 100
    x = Vector{Float64}(undef, ndata)
    y = Vector{Float64}(undef, ndata)
    e = Vector{Float64}(undef, ndata)

    function func(x, p)
        count += 1
        return p[1] .+ p[2] * exp.(-0.5 .* ((x .- p[3]) ./ p[4]).^2)
    end
    residuals(x, p) = ((func(x, p) .- y) ./ e)

    # "True" parameter values
    params = [2.0, 3.0, 4.0, 5.0]

    # Accumulate data on accuracies in parameter estimation
    N_cmpfit = Vector{Float64}()
    N_lsqfit = Vector{Float64}()

    rng = MersenneTwister(0) # Initialize random generator
    for ii in 1:nloop
        # Generate data
        x = range(-3., stop=3., length=ndata) .* params[4] .+ params[3]
        y = func(x, params)
        e = fill(0.1, length(x))
        y .+= e .* randn(rng, length(x))

        # Fit with CMPFit
        count = 0
        if resid
            res1 = cmpfit(x, fill(0., length(x)), fill(1., length(x)), residuals, params)
        else
            res1 = cmpfit(x, y, e, func, params)
        end
        count_CMPFit = count

        # Fit with LsqFit
        count = 0
        if resid
            res2 = curve_fit(residuals, x, fill(0., length(x)), params)
        else
            res2 = curve_fit(func, x, y, (1 ./ e).^2, params)
        end
        count_LsqFit = count
        ratio = count_LsqFit / count_CMPFit

        if ii == 1
            @printf "First iteration:\n"
            @assert res1.dof == res2.dof            
            @printf "%7s | %10s: %10d   %10s: %10.6f\n" "CMPFit" "Count" count_CMPFit "Red. χ^2" sum(abs2, (func(x, res1.param) - y) ./ e) / res1.dof
            @printf "%7s | %10s: %10d   %10s: %10.6f\n" "LsqFit" "Count" count_LsqFit "Red. χ^2" sum(abs2, (func(x, res2.param) - y) ./ e) / res2.dof
            @printf "%7s | %10s:" "CMPFit" "Best fit"
            for i in 1:length(params)
                @printf "%10.6f " res1.param[i]
            end
            println()
            @printf "%7s | %10s:" "LsqFit" "Best fit"
            for i in 1:length(params)
                @printf "%10.6f " res2.param[i]
            end
            println()
            @printf "%7s | %10s:" "CMPFit" "Uncert."
            for i in 1:length(params)
                @printf "%10.6f " res1.perror[i]
            end
            println()
            @printf "%7s | %10s:" "LsqFit" "Uncert."
            for i in 1:length(params)
                @printf "%10.6f " standard_error(res2)[i]
            end
            println()
        end
        append!(N_cmpfit, (res1.param .- params) ./ res1.perror)
        append!(N_lsqfit, (res2.param .- params) ./ standard_error(res2))
    end
    N_cmpfit = reshape(N_cmpfit, length(params), nloop)
    N_lsqfit = reshape(N_lsqfit, length(params), nloop)
    @printf "\nN = (best_fit_value - true_value) / fit_uncertainty\n"
    @printf "%7s | %10s:" "CMPFit" "mean(N)"
    for i in 1:length(params)
        @printf "%10.6f " mean(N_cmpfit[i,:])
    end
    println("  (should be ~0)")
    @printf "%7s | %10s:" "LsqFit" "mean(N)"
    for i in 1:length(params)
        @printf "%10.6f " mean(N_lsqfit[i,:])
    end
    println("  (should be ~0)")
    @printf "%7s | %10s:" "CMPFit" "std(N)"
    for i in 1:length(params)
        @printf "%10.6f " std(N_cmpfit[i,:])
    end
    println("  (should be ~1)")
    @printf "%7s | %10s:" "LsqFit" "std(N)"
    for i in 1:length(params)
        @printf "%10.6f " std(N_lsqfit[i,:])
    end
    println("  (should be ~1)")
end
pkofod commented 5 years ago

Wonderful script, thanks! Was this run against master or release?

gcalderone commented 5 years ago

LsqFit version 0.6.0

gcalderone commented 5 years ago

I just tried with master, here are the results:

julia> compare()
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         93     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)

julia> compare(resid=true)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         93     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)

Issue 2 seems solved in master. Issues 1, 3 and 4 are still there.

pkofod commented 5 years ago

The fact that issue 2 is gone is very good to hear. I'll have to think about the other issues.

pkofod commented 5 years ago

I made it easier to specify forward differencing for finite-differenced Jacobians, as this is one place where LsqFit differs from many other programs, which default to (or require) forward differencing. Since we use central differencing, the number of f calls is inflated. https://github.com/JuliaNLSolvers/NLSolversBase.jl/pull/98 We just need it to pass, then tag a feature release for NLSolversBase.
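
For intuition, a minimal sketch of why the differencing scheme matters for the f-call count (illustration only, not the NLSolversBase/DiffEqDiffTools implementation): a forward-difference Jacobian of f: Rⁿ → Rᵐ costs n extra f calls per Jacobian when the already-computed F(x) is reused, while a central-difference Jacobian always costs 2n.

function jac_forward(f, x; h=sqrt(eps()), Fx=f(x))
    # n calls in total when the caller passes in the already-computed Fx
    J = zeros(length(Fx), length(x))
    for j in eachindex(x)
        xp = copy(x); xp[j] += h
        J[:, j] = (f(xp) .- Fx) ./ h
    end
    return J
end

function jac_central(f, x; h=cbrt(eps()))
    Fx = f(x)                        # only used to size J; not reusable below
    J = zeros(length(Fx), length(x))
    for j in eachindex(x)
        xp = copy(x); xp[j] += h
        xm = copy(x); xm[j] -= h
        J[:, j] = (f(xp) .- f(xm)) ./ (2h)
    end
    return J
end

For the 4-parameter model above that is 4 vs. 8 model evaluations per Jacobian, which is roughly the gap between the CMPFit and default LsqFit counts in the first iteration.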

pkofod commented 5 years ago

It should now be possible to choose :finiteforward as the autodiff keyword to reduce the number of f calls (at the expense of some accuracy, at least in theory).

Edit: scratch that. We're not re-using the already-calculated F, so it doesn't currently improve anything. You have to be able to provide an already-calculated Fx here https://github.com/JuliaDiffEq/DiffEqDiffTools.jl/blob/2dc3c201ad0b1c568ce204f409ec2806000b32e4/src/jacobians.jl#L123 for it to improve. I can special-case it on our end if they don't want to do it, by grabbing F from fx in the cache in a combined value_jacobian! call.
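
The kind of combined call meant here could look like the sketch below (hypothetical helper, not the actual NLSolversBase value_jacobian! API): compute F(x) once and hand it to a forward-difference Jacobian so it is not recomputed.

# Hypothetical combined value-and-Jacobian helper (illustration only),
# reusing jac_forward from the sketch above.
function value_jacobian(f, x)
    Fx = f(x)                      # one evaluation for the residual value...
    J = jac_forward(f, x; Fx=Fx)   # ...reused by the forward-difference Jacobian
    return Fx, J
end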

Edit 2: By the way, here's the updated script:


using CMPFit, LsqFit, Random, Printf, Statistics

function compare(;nloop=10000, resid=false, lsqfit_autodiff=:finite)
    # Variable to count function evaluations
    count = 0
    ndata = 100
    x = Vector{Float64}(undef, ndata)
    y = Vector{Float64}(undef, ndata)
    e = Vector{Float64}(undef, ndata)

    function func(x, p)
        count += 1
        return p[1] .+ p[2] * exp.(-0.5 .* ((x .- p[3]) ./ p[4]).^2)
    end
    residuals(x, p) = ((func(x, p) .- y) ./ e)

    # "True" parameter values
    params = [2.0, 3.0, 4.0, 5.0]

    # Accumulate data on accuracies in parameter estimation
    N_cmpfit = Vector{Float64}()
    N_lsqfit = Vector{Float64}()

    rng = MersenneTwister(0) # Initialize random generator
    for ii in 1:nloop
        # Generate data
        x = collect(range(-3., stop=3., length=ndata) .* params[4] .+ params[3])
        y = func(x, params)
        e = fill(0.1, length(x))
        y .+= e .* randn(rng, length(x))

        # Fit with CMPFit
        count = 0
        if resid
            res1 = cmpfit(x, fill(0., length(x)), fill(1., length(x)), residuals, params)
        else
            res1 = cmpfit(x, y, e, func, params)
        end
        count_CMPFit = count

        # Fit with LsqFit
        count = 0
        if resid
            res2 = curve_fit(residuals, x, fill(0., length(x)), params; autodiff=lsqfit_autodiff)
        else
            res2 = curve_fit(func, x, y, (1 ./ e).^2, params; autodiff=lsqfit_autodiff)
        end
        count_LsqFit = count
        ratio = count_LsqFit / count_CMPFit

        if ii == 1
            @printf "First iteration:\n"
            @assert res1.dof == dof(res2)            
            @printf "%7s | %10s: %10d   %10s: %10.6f\n" "CMPFit" "Count" count_CMPFit "Red. χ^2" sum(abs2, (func(x, res1.param) - y) ./ e) / res1.dof
            @printf "%7s | %10s: %10d   %10s: %10.6f\n" "LsqFit" "Count" count_LsqFit "Red. χ^2" sum(abs2, (func(x, coef(res2)) - y) ./ e) / dof(res2)
            @printf "%7s | %10s:" "CMPFit" "Best fit"
            for i in 1:length(params)
                @printf "%10.6f " res1.param[i]
            end
            println()
            @printf "%7s | %10s:" "LsqFit" "Best fit"
            for i in 1:length(params)
                @printf "%10.6f " coef(res2)[i]
            end
            println()
            @printf "%7s | %10s:" "CMPFit" "Uncert."
            for i in 1:length(params)
                @printf "%10.6f " res1.perror[i]
            end
            println()
            @printf "%7s | %10s:" "LsqFit" "Uncert."
            for i in 1:length(params)
                @printf "%10.6f " stderror(res2)[i]
            end
            println()
        end
        append!(N_cmpfit, (res1.param .- params) ./ res1.perror)
        append!(N_lsqfit, (coef(res2) .- params) ./ stderror(res2))
    end
    N_cmpfit = reshape(N_cmpfit, length(params), nloop)
    N_lsqfit = reshape(N_lsqfit, length(params), nloop)
    @printf "\nN = (best_fit_value - true_value) / fit_uncertainty\n"
    @printf "%7s | %10s:" "CMPFit" "mean(N)"
    for i in 1:length(params)
        @printf "%10.6f " mean(N_cmpfit[i,:])
    end
    println("  (should be ~0)")
    @printf "%7s | %10s:" "LsqFit" "mean(N)"
    for i in 1:length(params)
        @printf "%10.6f " mean(N_lsqfit[i,:])
    end
    println("  (should be ~0)")
    @printf "%7s | %10s:" "CMPFit" "std(N)"
    for i in 1:length(params)
        @printf "%10.6f " std(N_cmpfit[i,:])
    end
    println("  (should be ~1)")
    @printf "%7s | %10s:" "LsqFit" "std(N)"
    for i in 1:length(params)
        @printf "%10.6f " std(N_lsqfit[i,:])
    end
    println("  (should be ~1)")
end

with example output

julia> compare(resid=true, lsqfit_autodiff=:forwarddiff)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         21     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)

julia> compare(resid=true, lsqfit_autodiff=:finiteforward)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         93     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)
pkofod commented 5 years ago

With https://github.com/JuliaLang/METADATA.jl/pull/21268 we should now be able to re-use F(x), just like mpfit does, for the forward-difference Jacobian approximation. I just need to re-order some things, possibly only in NLSolversBase.

gcalderone commented 5 years ago

Hi, thank you for all your work!! To summarize the current situation, with LsqFit v0.7.1 and autodiff=:forwarddiff:

I'm a little confused by the autodiff keyword and its possible values. It is not clear to me which value activates automatic differentiation: forwarddiff or finiteforward? Likely forwarddiff, but that is the case where the number of function evaluations approximately matches CMPFit's (which uses finite differentiation), hence I would have expected the opposite...

Also, if I want finite differentiation, why should I use the autodiff keyword? Finally, repeating the forward token in all cases seems a little redundant...

pkofod commented 5 years ago

> Likely forwarddiff, but that is the case where the number of function evaluations approximately matches CMPFit's (which uses finite differentiation), hence I would have expected the opposite...

:forwarddiff uses forward-mode automatic differentiation (https://en.wikipedia.org/wiki/Automatic_differentiation#Forward_accumulation) to calculate the Jacobian. It gives the same number of residual/model evaluations as mpfit's forward finite-difference approximation: one from evaluating the step, and an additional one for the Jacobian estimation. We can do the same with finite differences now that DiffEqDiffTools has been updated, but really you should just use :forwarddiff if your model function supports it. If we always calculated the Jacobian and the residual value at the proposed x, we could potentially decrease the number of function evaluations further, under the condition that most steps are accepted.
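
A minimal sketch of what :forwarddiff does (assuming the model is written generically over element types, as the Gaussian model in the script above is): ForwardDiff.jl evaluates the model on dual numbers (for this 4-parameter model, a single chunked evaluation) and reads the Jacobian off their derivative parts.

using ForwardDiff

# Jacobian of the 4-parameter Gaussian model w.r.t. the parameters,
# obtained from a dual-number evaluation of the model.
xdata = collect(range(-3.0, stop=3.0, length=100))
model(p) = p[1] .+ p[2] .* exp.(-0.5 .* ((xdata .- p[3]) ./ p[4]).^2)
J = ForwardDiff.jacobian(model, [2.0, 3.0, 4.0, 5.0])
size(J)  # (100, 4): one row per data point, one column per parameter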

> Also, if I want finite differentiation, why should I use the autodiff keyword? Finally, repeating the forward token in all cases seems a little redundant...

It's just always been called autodiff (the keyword). In principle there could be both an autodiff and a finitediff keyword, but I'm not sure that would make things easier for users.

:finiteforward is the forward finite-difference approximation; :forwarddiff is the ForwardDiff.jl forward-mode automatic differentiation functionality for calculating a Jacobian. It should/will be documented (better).
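
In terms of the comparison script above, the two options are selected like this (keyword values as used throughout this thread):

# Forward finite differences for the Jacobian:
fit_fd = curve_fit(func, x, y, (1 ./ e).^2, params; autodiff=:finiteforward)
# Forward-mode automatic differentiation via ForwardDiff.jl:
fit_ad = curve_fit(func, x, y, (1 ./ e).^2, params; autodiff=:forwarddiff)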

pkofod commented 5 years ago

Latest versions of the packages:

julia> compare(lsqfit_autodiff=:finiteforward)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         47     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)

julia> compare(lsqfit_autodiff=:forwarddiff)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         20     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)

julia> compare()
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         91     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)

julia> compare(resid=true; lsqfit_autodiff=:finiteforward)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         47     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)

julia> compare(resid=true; lsqfit_autodiff=:forwarddiff)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         20     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)

julia> compare(resid=true)
First iteration:
 CMPFit |      Count:         23     Red. χ^2:   1.072366
 LsqFit |      Count:         91     Red. χ^2:   1.072366
 CMPFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 LsqFit |   Best fit:  2.027822   2.944700   3.911829   4.968451 
 CMPFit |    Uncert.:  0.025196   0.029339   0.044270   0.072512 
 LsqFit |    Uncert.:  0.026092   0.030382   0.045844   0.075090 

N = (best_fit_value - true_value) / fit_uncertainty
 CMPFit |    mean(N):  0.004758   0.021708  -0.001003  -0.022390   (should be ~0)
 LsqFit |    mean(N):  0.004726   0.022502  -0.002308  -0.022491   (should be ~0)
 CMPFit |     std(N):  0.997627   0.985816   1.006746   1.004508   (should be ~1)
 LsqFit |     std(N):  1.006822   0.995180   1.016948   1.015014   (should be ~1)