fortran-lang / stdlib

Fortran Standard Library
https://stdlib.fortran-lang.org
MIT License

Optimization, Root finding, and Equation Solvers #87

Open jacobwilliams opened 4 years ago

jacobwilliams commented 4 years ago

I'm willing to spearhead a module or set of modules containing:

I have already modernized some classic codes, and there are also some others with permissive licenses floating around. I am willing to merge mine into stdlib. I need to gather them together and provide a nice interface so that a user can call any of the solvers in a similar way. We can discuss what that interface would look like. Here are some example codes:

Eventually we could even provide (optional) integrations with larger third-party and/or commercial libraries such as IPOPT and/or SNOPT

See also

Note that some of these in various Python packages are just wrappers to Fortran 77 codes (such as slsqp).

certik commented 4 years ago

Thank you! This would be super helpful.

Let's discuss the user-facing public API.


rweed commented 4 years ago

I have refactored (i.e. free of spaghetti code) versions of Nocedal and Morales' LBFGSB code, Turlach's quadratic programming code, and a couple of nature-inspired optimizers (cuckoo search and particle swarm) I can contribute for optimization, provided the original licenses for those codes are compatible. I know one is GPL2. I also have my versions of Brent's fmin and zeroin that I got from Frank Carmichael's PDAS site. He refactored the original FMM code towards F90.

rweed commented 4 years ago

Also, forgot that I have a version of the Nelder-Mead simplex algorithm I can contribute.

epagone commented 4 years ago

@jacobwilliams how about MATH77? Those are amazing pieces of code...

Concerning (optional) larger third-party libraries, I think FGSL deserves consideration, even if there might be licence issues. Hopefully they can be circumvented by making its installation optional.

scivision commented 4 years ago

a set of functions one would find on a handheld scientific calculator in Fortran: https://github.com/scivision/rpn-calc-fortran#other-functions

jacobwilliams commented 4 years ago

@rweed Absolutely, we can use them! Although I think any GPL licensed code would be off limits since we don't want to make the whole library GPL, right?

@epagone Yep, we can check Math77 for anything we can use. That has a permissive license.

jacobwilliams commented 4 years ago

@scivision Is that for a different topic? I think we should have a function parser in stdlib, that could be used there. I can make a new ticket for that.

certik commented 4 years ago

The hard part and the main contribution of stdlib is to agree on a well designed API, that is approved, accepted and adopted by the wide Fortran community. The stdlib library then provides both this specification of the API, as well as a reference implementation.

I agree the actual implementation can and should use well-tested available code whenever possible (e.g. by porting the SciPy Fortran code into stdlib), instead of reinventing our own. We should only write our own numerical code if there is no permissively licensed code out there that would work, or if we discover bugs or numerical issues (which can happen).

Yes, we can't use GPLed code, only permissive licenses like MIT or BSD.

jacobwilliams commented 4 years ago

My thinking is that it would be a high-level object-oriented interface, something like what I did for slsqp. I think this is the best way for modern codes of this type. The alternatives (forcing you to stuff your user data into giant work arrays, common blocks, or reverse communication) are unsatisfactory as far as I'm concerned.

There would be procedures for:

There would be methods a user would have to provide:

There would also be some mechanism for stopping the solve process (either within the problem or report function).

Other notes:

rweed commented 4 years ago

@jacobwilliams , In my code I usually (but not always; for old code it's sometimes not practical) use an empty abstract class (optimizers_t) and derive all of the base classes from that. I augment that with an abstract objective-function class. I think at a minimum, if you go OO, you will need something like these two classes.

Also, I could use a little guidance from those of you who are wise in the ways of software licenses. If I take a code written in another language (MATLAB and Python in this case) and completely rewrite it in Fortran (using none of the original source, just using the implementation as a guide), does the Fortran code inherit the license restrictions of the original code, albeit in another language?

Actually, we probably need some sort of licensing guide that covers these kinds of issues and goes over what's usable and what's not (including copyrighted material from books, etc.).

scivision commented 4 years ago

@jacobwilliams the functions are a bunch of polymorphic math functions that can readily be adapted in stdlib.

The simple REPL it has is a separate idea, if a pure Fortran REPL would be of interest.

certik commented 4 years ago

If I take a code written in another language (MATLAB and Python in this case) and completely rewrite it in Fortran (using none of the original source, just using the implementation as a guide) does the Fortran code inherit the license restrictions of the original code albeit in another language.

If you take a program in one language and rewrite it in another language, that is considered derivative work. So if the original is, say, GPL licensed, then your derivative work must also be GPL licensed. If you want to avoid that, you can do a so-called clean-room implementation: one person writes a specification based on the original code, and another person takes that specification and creates a new implementation from it without ever looking at the original code.

My thinking is that it would be a high-level object oriented interface.

@jacobwilliams besides a high level interface, would you be against also having a low level non-OO API in stdlib if others volunteer it?

scivision commented 4 years ago

A library for geodesy, geographic coordinate transforms for sensors, satellites, etc https://github.com/scivision/maptran3d

jacobwilliams commented 4 years ago

@rweed That sounds similar to what I was thinking. Do you have any of your examples in a repo somewhere I could examine? I hope to find some time soon to start working up an example.

@certik I have no objection to a low-level interface. I think in some cases it would be fairly easy to make. FYI: if you look at the SLSQP code, you will notice that the original reverse-communication interface is still there, and that's what the OO interface calls. However, I don't want to impose this pattern on all the solvers, since it makes for somewhat weird and not-straightforward code, in my opinion. I think for a "simple" interface to an OO solver, all we'd need to do is make a subroutine where everything is passed in as arguments (including non-type-bound functions), and then the class is just created and run inside this subroutine. It would be easy to do, I think... but the devil is in the details...

certik commented 4 years ago

@jacobwilliams perfect, I agree. Where there's a will, there's a way. I will support you (and anybody) on a nice OO interface, and if you support me (and others) on a nice low level non OO interface, then I think we all, as a community, will get what we want.

epagone commented 4 years ago

Another issue is to choose the algorithms to make available to solve a certain problem.

E.g., for popular problems like one-dimensional root-finding there is plenty of choice (I'm afraid too much): there are some algorithms available in MATH77, @jacobwilliams has refactored and modernised zeroin and I am pretty sure there could be other good implementations already available (see John Burkardt's large collection). For the numerical integration of one-dimensional functions there are many strategies and implementations available, etc.

Should we expose all the available routines through the API, or should we choose the "best" ones?

certik commented 4 years ago

@epagone excellent question. I would allow the user to select the method, with some unified API. Let's look at how other projects do this, and then we can discuss what the best API would be.

SciPy

https://docs.scipy.org/doc/scipy/reference/optimize.html#root-finding

It looks like they have a root_scalar function that accepts a method argument:

    root_scalar(method='brentq')
    root_scalar(method='brenth')
    root_scalar(method='bisect')
    root_scalar(method='ridder')
    root_scalar(method='newton')
    root_scalar(method='toms748')
    root_scalar(method='secant')
    root_scalar(method='halley')

and in addition, they provide functions for each algorithm, such as brentq, ridder, etc.
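To make the shape of that API concrete, here is a minimal, purely illustrative Python sketch of the SciPy-style design: one entry point that dispatches on a method string, plus standalone per-method functions. The names (`root_scalar`, `bisect`, `secant`) mirror SciPy's, but these are simplified toy implementations, not SciPy's actual code.

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection on a bracket [a, b] where f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration from two starting points (no bracket required)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                      # flat secant; cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

_METHODS = {"bisect": bisect, "secant": secant}

def root_scalar(f, method, *args, **kwargs):
    """Dispatch to a solver selected by name, as SciPy's root_scalar does."""
    try:
        solver = _METHODS[method]
    except KeyError:
        raise ValueError(f"unknown method {method!r}") from None
    return solver(f, *args, **kwargs)
```

For example, `root_scalar(lambda x: x*x - 2, "bisect", 0.0, 2.0)` returns an approximation of the square root of 2. The same layout translates to Fortran as a wrapper procedure with a `select case` on the method string, delegating to individually exported solver routines.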

Matlab

Matlab has fzero, which seems to be just one particular method. Then they have a general method solve where one can set a solver argument. See also the section "Algorithms", but for root finding, it looks like they only have one method.

Julia

Julia has a separate package Roots.jl for this, which provides both Matlab like fzero and their own find_zero, which accepts a method as an argument (such as Bisection() or Newton()). The Roots.jl package was started as a result of this issue: https://github.com/JuliaLang/julia/issues/2447, where you can find the initial discussion about the API.

epagone commented 4 years ago

FWIW I vote for the SciPy interface, i.e. a string argument that selects the method to call. Clean and modular. Maybe we still need to select the "best" algorithm (that will be the default choice) and make the string argument optional.

rweed commented 4 years ago

@jacobwilliams I don't have the code in a repository, but the following will give you an example of how I implement the abstract classes. I'll be the first to admit there is a lot of room for improvement, but this works for my current applications.

An empty abstract class that is the parent class for all other optimizers. I probably could add some abstract methods, but that forces you to implement all of them even if you don't need them all: optimizerClass.F90.txt

module with abstract classes for objective functions and constraints

objectiveFunctions.F90.txt

The basic class and subroutine interface for my cuckoo search code, based on Walton's modified cuckoo search MATLAB code (see comments in the module), which is unfortunately GPL2. I pass the objective-function class as an argument instead of including it as class data via a procedure pointer. I have generic methods (optimizer, init, and dealloc) to do the basic things you need for initialization etc.

cuckooSearch.F90.txt

A set of standard test cases for unconstrained optimization illustrating use of objective function class

optTestFunctions.f90.txt

Finally a test program for the cuckoo search routines. Note the actual code uses a lot of utilities for random numbers and my implementation of various MATLAB functions

testMCS.f90.txt

ivan-pi commented 4 years ago

The Boost C++ library simply provides the routines under different names:

```C++
// Bracket and Solve Root
template <class F, class T, class Tol>
std::pair<T, T> bracket_and_solve_root(
    F f, const T& guess, const T& factor, bool rising,
    Tol tol, boost::uintmax_t& max_iter);

// TOMS 748 algorithm
template <class F, class T, class Tol>
std::pair<T, T> toms748_solve(
    F f, const T& a, const T& b, Tol tol, boost::uintmax_t& max_iter);
```

- Root Finding With Derivatives
```C++
// Newton-Raphson
template <class F, class T>
T newton_raphson_iterate(F f, T guess, T min, T max, int digits, boost::uintmax_t& max_iter);

// Halley
template <class F, class T>
T halley_iterate(F f, T guess, T min, T max, int digits, boost::uintmax_t& max_iter);

// Schröder
template <class F, class T>
T schroder_iterate(F f, T guess, T min, T max, int digits, boost::uintmax_t& max_iter);
```
Personally, I think the SciPy way fits Fortran best: we provide both a root_scalar/find_zero function where you specify the method using a string, and also a set of functions for each algorithm.

The find_zero can then pick a default method based on the arguments. In SciPy the default is Brent's algorithm for bracketed roots, and Newton or secant if not; see the excerpt from SciPy's root_scalar below:

    # Pick a method if not specified.
    # Use the "best" method available for the situation.
    if not method:
        if bracket:
            method = 'brentq'
        elif x0 is not None:
            if fprime:
                if fprime2:
                    method = 'halley'
                else:
                    method = 'newton'
            else:
                method = 'secant'
    if not method:
        raise ValueError('Unable to select a solver as neither bracket '
                         'nor starting point provided.')

The root solvers are a good project to test some type-templating mechanism like Jin2For. Having to modify both the single and double precision versions of each solver would be annoying.

epagone commented 4 years ago

I agree with almost everything written by @ivan-pi. The only variation I would suggest is to make Brent's method the default for scalar root-finding without derivatives, and Newton-Raphson's algorithm when a derivative is provided (as per the theory). Furthermore, practical implementations rarely code exactly the textbook methods: e.g. some solvers (I don't recall exactly which at the moment, and I cannot check) use Newton's method with bracketing to detect divergence. I would focus more on the practical implementations to decide on the use cases, although I understand that we need to give each a clear name to make them selectable.
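The safeguarded scheme alluded to above keeps Newton iterates inside a shrinking bracket and falls back to bisection whenever the Newton step would escape it; this is essentially the `rtsafe` idea from Numerical Recipes. A hedged Python sketch (illustrative names, not any library's actual code):

```python
def newton_safe(f, dfdx, a, b, tol=1e-12, max_iter=100):
    """Newton's method confined to a bracket [a, b]: take the Newton step
    when it stays inside the current bracket, otherwise bisect."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed by [a, b]")
    if fa > 0:                            # orient so that f(a) < 0 < f(b)
        a, b = b, a
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx, dfx = f(x), dfdx(x)
        if dfx != 0 and min(a, b) < x - fx / dfx < max(a, b):
            x_new = x - fx / dfx          # Newton step stays in the bracket
        else:
            x_new = 0.5 * (a + b)         # fall back to bisection
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
        if f(x) < 0:                      # shrink the bracket around the root
            a = x
        else:
            b = x
    return x
```

The combination converges quadratically near the root (Newton) while the bracket guarantees it never diverges, which is exactly why practical solvers deviate from the textbook method.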

ivan-pi commented 4 years ago

The NLopt library contains several constrained and unconstrained nonlinear optimization routines and comes with a Fortran interface (geared toward the old F77 style of Fortran). Ironically, many of the routines are translations of old Fortran ones to C using f2c. I find the NLopt API very intuitive and simple to use. I made a pull request with a modern Fortran interface for NLopt a while ago. Maybe it could serve as a template for the stdlib one.

epagone commented 3 years ago

Would root bracketing routines be in scope for this issue? I believe so, but I am struggling to find any prior art in this area. I remember some in Numerical Recipes (which has a well-known troublesome licence) and szero/dzero of MATH77 (which have some partial bracketing embedded to expand the initial search range).

Are you aware of any high-quality bracketing routines available in the wild?

arjenmarkus commented 3 years ago

Usually such algorithms and high-quality implementations can be found on netlib - https://www.netlib.org/.


epagone commented 3 years ago

Usually such algorithms and high-quality implementations can be found on netlib - https://www.netlib.org/.

I am well aware of netlib but maybe I do not know the right keywords and I couldn't find much. I will check what other languages offer.

epagone commented 3 years ago

After searching a bit about bracketing routines in other languages, I have found only one solution and another partial one.

1) IMO Boost has the best "packaged" solution (untested) with the routine Bracket and Solve Root:

bracket_and_solve_root is a convenience function that calls TOMS 748 algorithm internally to find the root of f(x). It is generally much easier to use this function rather than TOMS 748 algorithm, since it does the hard work of bracketing the root for you. Its bracketing routines are quite robust and will usually be more foolproof than home-grown routines, unless the function can be analysed to yield tight brackets.

TOMS 748 is an improved Brent's method with some additional constraints (see the Notes here).

2) Julia's find_zeros (again, untested) claims to automatically find multiple roots in a specified interval, but I couldn't find anything about expanding an interval to try to bracket a root.

To my surprise, I could not easily find anything relevant in python.

ivan-pi commented 3 years ago

I would definitely say bracketing is part of this issue. From the looks of it, the Julia find_zeros will attempt to find multiple roots on a defined interval, by dividing that interval into sub-intervals and checking for a difference in sign. It doesn't attempt to automatically expand the interval in search of a sign difference.

I guess the reason you cannot find much prior art is that writing at least a simple bracket search is not that hard. For a fixed interval you divide it into smaller pieces. For an open interval you expand (e.g. by multiplying the interval by a constant factor). In applications where I required a root-finder, my problems typically had some "intrinsic" brackets (e.g. related to known solution limits), or I had a good initial guess (e.g. from a previous time step), meaning I could safely use an iterative method without bracketing.
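The two strategies just described can be sketched in a few lines (an illustrative Python toy, not code from any library discussed here; the function names are invented): `find_sign_changes` subdivides a fixed interval, while `expand_bracket` grows an open one geometrically, similar in spirit to Numerical Recipes' `zbrac`.

```python
def find_sign_changes(f, a, b, n=100):
    """Subdivide the fixed interval [a, b] into n pieces and collect the
    sub-intervals where f changes sign (one bracket per simple root found)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    fs = [f(x) for x in xs]
    brackets = []
    for i in range(n):
        if fs[i] == 0:
            brackets.append((xs[i], xs[i]))       # landed exactly on a root
        elif fs[i] * fs[i + 1] < 0:
            brackets.append((xs[i], xs[i + 1]))
    return brackets

def expand_bracket(f, a, b, factor=1.6, max_iter=50):
    """Geometrically grow an open interval until f(a) and f(b) differ in sign,
    extending the end where |f| is smaller (the root presumably lies beyond it)."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        if fa * fb < 0:
            return a, b
        if abs(fa) < abs(fb):
            a += factor * (a - b)
            fa = f(a)
        else:
            b += factor * (b - a)
            fb = f(b)
    raise RuntimeError("no sign change found; root may not exist or may be a double root")
```

As noted below, both fail on double roots (the function never changes sign), so neither is foolproof.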

I think the way Boost does it is the most flexible: either you call the root-finding routine directly on your interval of choice, or you allow the root-finder to expand/shrink the bracket as necessary. I would be in favor of having this. I would point out, however, that the Boost function is limited to monotonic functions. Such bracketing routines will also fail at double roots (e.g. the function x^2). Unfortunately, no design will be completely foolproof.

I also found the NAG library has a solution: NAG FL Interface - C05 (Roots)

The IMSL Fortran library on the other hand only provides one bracketed root-finding method (Brent's) and one iterative one (Müller's method): IMSL - Chapter 7: Nonlinear equations

arjenmarkus commented 3 years ago

Wrt Netlib: I searched for "Newton" and came across quite a few references. I also have a number of the minimisation routines developed by Mike Powell. Probably of less relevance for the standard library is a collection of articles on probabilistic optimisation algorithms (I have implemented them in Tcl, my other favourite language, based on these articles). I can also dig up some other stuff concerning root finding and the like. Mind you: it is one thing to have an algorithm; it is quite another to ensure that it always provides an answer (or detects that things are going wrong).

It might also be a good idea to consider "automatic differentiation" as the basis for algorithms that require (first-order) derivatives, like Newton-Raphson.


epagone commented 3 years ago

I guess the reason you cannot find much prior art is that writing at least a simple bracket search is not that hard. For a fixed interval you divide it into smaller pieces. For an open interval you expand (e.g. by multiplying the interval by a constant factor).

True, but if you want to do a proper job (that is, efficient and taking into account subtle numerical problems), it's not that trivial. Have a look at the algorithm of MATH77 (which only looks outside of an interval to bracket a root):

When $\text{F1} \times \text{F2} > 0$ at the initial point, iterates are generated according to the formula $x = x_{\min} + (x_{\min} - x_{\max}) \times \rho$, where the subscript "min" is associated with the $(x, f)$ pair that has the smallest value of $|f|$, and $\rho$ is 8 if $r = f_{\min} / (f_{\max} - f_{\min}) \geq 8$, else $\rho = \max(\kappa/4, r)$, where $\kappa$ is a count of the number of iterations that have been taken without finding $f$'s with opposite signs. If X1 and X2 have the same value initially (and F1 and F2 equal F(X1)), then the next $x$ is a distance $0.008 + |x_{\min}|/4$ from $x_{\min}$, taken toward 0. (If X1 = X2 = 0, the next $x$ is $-0.008$.)

It is a bit like coding the scalar root-finding method on your own: it's not that hard, but doing it properly is much harder.

In applications where I required a root-finder my problems typically had some "intrinsic" brackets (e.g. related to known solution limits), or I had a good initial guess (e.g. from a previous time step), meaning I could safely use an iterative method without bracketing.

Me too...in some cases, but not all.

I think the way Boost does it is the most flexible: either you call the root-finding routine directly on your interval of choice, or you allow the root-finder to expand/shrink the bracket as necessary. I would be in favor of having this. I would point out, however, that the Boost function is limited to monotonic functions. Such bracketing routines will also fail at double roots (e.g. the function x^2). Unfortunately, no design will be completely foolproof.

Agreed

I also found the NAG library has a solution: NAG FL Interface - C05 (Roots)

The IMSL Fortran library on the other hand only provides one bracketed root-finding method (Brent's) and one iterative one (Müller's method): IMSL - Chapter 7: Nonlinear equations

Nice, thanks.

jacobwilliams commented 3 years ago

OK, just to get something started, I have: https://github.com/jacobwilliams/opt-lib

Currently, this has methods for non-derivative scalar root finding for a bracketed interval. The methods are in classes, but I also have a single function interface (root_scalar). Example:

  call root_scalar('pegasus',func,x1,x2,xzero,fzero,iflag)

Methods are:

This is not final by any means and comments are welcome! I also have to figure out the preprocessor thing to get it to work with different real kinds.

rweed commented 3 years ago

@jacobwilliams

You might also want to look at the algorithm in the following reference

T.R. Chandrupatla, " A new hybrid quadratic/bisection algorithm for finding the zero of a nonlinear function without derivatives," Advances in Engineering Software, Vol 28, 1997, pp. 145-149.

It's similar to Brent's method, but adds some constraints that are supposed to improve performance for functions with repeated roots by eliminating some of the bisection evaluations. The code in the article is BASIC, but it's easy to implement in Fortran.

RW

jacobwilliams commented 3 years ago

@rweed Done! Couldn't find the original paper, but got it from here: https://books.google.com/books?id=cC-8BAAAQBAJ&pg=PA95#v=onepage&q&f=false

epagone commented 3 years ago

Might be of interest also TOMS 748 (from scipy):

Algorithm 748 with k=2 is asymptotically the most efficient algorithm known for finding roots of a four times continuously differentiable function. In contrast with Brent’s algorithm, which may only decrease the length of the enclosing bracket on the last step, Algorithm 748 decreases it each iteration with the same asymptotic efficiency as it finds the root.

Reference: Alefeld, G. E., Potra, F. A., and Shi, Yixun, "Algorithm 748: Enclosing Zeros of Continuous Functions", ACM Trans. Math. Softw., Vol. 21 (1995), doi: 10.1145/210089.210111

PS: I don't know about potential license implications

rweed commented 3 years ago

@jacobwilliams

I have created a GitHub repository to hold some utility routines I've written over the years, and have included my implementations of the Brent and Chandrupatla root finders. See: https://github.com/rweed/ModForUtils/root_finders. The files there contain two implementations (each) of the Brent and Chandrupatla codes, along with the nine test functions described in Chandrupatla's article and a test program that compares the two methods on two different intervals around the desired root. Feel free to modify the code as needed (and change the license) if you wish to include the Chandrupatla code in your stdlib code. Note, the versions of the Brent and Chandrupatla solvers differ only in the way I pass the desired functions: the first defines a "univariate" function class that wraps calls to the test functions and is mostly for testing purposes; the second passes a procedure argument based on an abstract interface.

The rest of the utilities in the repository are some things I've used extensively in working codes and do things like compute binomial coefficients and factorials, test for NaN and Infinity floating point exceptions, check for "almost" equality of floating point numbers, provide support for various random number tasks, and provide my implementations of various array sorters (quicksort, heapsort, mergesort, insertion sort, and counting sort). The repository is a "work in progress" and I intend to include all the code I can that's not restricted by copyrights or GPL licenses etc. or the ITAR restrictions imposed by some of my DoD sponsored work.

jacobwilliams commented 3 years ago

OK, so now I have:

  • bisection
  • brent
  • brentq
  • brenth
  • regula_falsi
  • illinois
  • anderson_bjorck
  • anderson_bjorck_king
  • ridders
  • pegasus
  • bdqrf
  • muller
  • chandrupatla
  • toms748 [not sure if we'll be able to keep this one due to the license]
  • zhang
  • blendtf

I think this is the most comprehensive set of bracketing root finders in any library anywhere. I've even included some things i've not seen anywhere else (such as an option for all methods to fallback to bisection if the method fails, and also some protections against evaluating the function outside the original bounds).
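The fall-back-to-bisection option might be structured roughly like the wrapper below (an illustrative Python sketch; `solve_with_fallback` and its failure criteria are hypothetical, not opt-lib's actual code). The idea: try the requested method, and if it raises or returns a point whose residual is not small, rerun with plain bisection, which always converges on a valid bracket.

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Plain bisection: slow but guaranteed on a valid bracket."""
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or 0.5 * abs(b - a) < tol:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def solve_with_fallback(method, f, a, b, residual_tol=1e-8, **kwargs):
    """Try the requested solver first; on failure, retry with bisection."""
    try:
        x = method(f, a, b, **kwargs)
        if abs(f(x)) < residual_tol:
            return x                     # method succeeded; accept its root
    except (ValueError, RuntimeError, ArithmeticError):
        pass                             # method failed outright
    return bisect(f, a, b)               # guaranteed fallback
```

In Fortran this would naturally be a flag on the solver class rather than a wrapper, but the control flow is the same.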

I'm using an #include file scheme to provide real32, real64, and real128 versions of the classes, and an interface block for the root_scalar function. I was loath to clutter up the code with those fypp directives, which make the code non-standard and thus break the IDE that I use. But I think this does the same thing, right? Acceptable or not? See here: https://github.com/jacobwilliams/opt-lib/blob/master/src/stdlib_root_module.F90

P.S. For the classes, what I would really want to use is a parametrized derived type. But that feature seems mostly useless since I don't see any way to get the kind input parameter to flow down into constant real params that are in all the various routines. Is there a way to do that? So instead, I just define root_solver_real32, root_solver_real64, etc.

arjenmarkus commented 3 years ago

Impressive list, indeed :). I have not looked at the code yet, but I do intend to.

As a side remark: if you want to get rid of the #include construction, you can also do this in a pure Fortran way. I have used that in the PLplot project (http://plplot.sf.net - all the gory details in https://sourceforge.net/p/plplot/plplot/ci/master/tree/bindings/fortran/). What it boils down to is:


epagone commented 3 years ago

@jacobwilliams Amazing piece of work. Thank you very much!

I have "spot-tested" in the past weeks your implementations as they were becoming available and they work like a charm for my typical use cases.

One (maybe minor) consideration worth highlighting at this point with the proponents of the "non-OO" interface (i.e. root_scalar) is that root_scalar is just a wrapper around the OO interface. This is unusual: typically the OO interface is built on top of the procedural approach, and I suspect that the request to make a procedural interface available was (at least partially) made assuming this is the case. In other words, the expectation was that the procedural interface would be simpler and faster, with "more direct" access to the core algorithms. That does not seem to be the case here.

To clarify: I believe that the procedural interface should be available to cater for more programming styles, but I think it should be pointed out that (unusually) the more direct access to the algorithms in this implementation is done through the OO interface.

Exciting times.
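The inverted layering described above, a procedural facade built on an OO core, can be illustrated with a small Python sketch (hypothetical names; not the opt-lib code):

```python
class BisectionSolver:
    """The core algorithm lives in a class, holding options as state."""

    def __init__(self, tol=1e-12, max_iter=200):
        self.tol, self.max_iter = tol, max_iter

    def solve(self, f, a, b):
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("invalid bracket")
        for _ in range(self.max_iter):
            m = 0.5 * (a + b)
            fm = f(m)
            if fm == 0 or 0.5 * (b - a) < self.tol:
                return m
            if fa * fm < 0:
                b = m
            else:
                a, fa = m, fm
        return 0.5 * (a + b)

def root_scalar(f, a, b, **options):
    """Procedural facade: construct the solver object and run it once."""
    return BisectionSolver(**options).solve(f, a, b)
```

The facade costs one object construction per call, which is why its overhead relative to calling the algorithm directly is worth measuring, as proposed below.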

jacobwilliams commented 3 years ago

@arjenmarkus Thanks! I'll look at your code. That actually sounds similar to what I did, so maybe we are saying the same thing.

jacobwilliams commented 3 years ago

@epagone Yep, in this case the class-based approach seemed more natural to me...but I'm not entirely happy with it. I'm still going to tinker with it. Also I want to set up a speed test to compare the two approaches.

None of this is final by any means. All comments are welcome!

epagone commented 3 years ago

Yep, in this case the class-based approach seemed more natural to me...

FWIW, same.

None of this is final by any means. All comments are welcome!

That's exactly the reason why I posted my last consideration :wink:

arjenmarkus commented 3 years ago

I have made a comment about the abstract interface func - see issue #1. This will simplify the use of the solver (unless I have overlooked something, that does happen :)).


Beliavsky commented 8 months ago

There is a new book, An Introduction to Optimization on Smooth Manifolds by Nicolas Boumal (Cambridge University Press), accompanied by the Manopt toolboxes for Riemannian optimization in Matlab, Python, and Julia. Maybe some functions could be translated to Fortran.