Closed: jschueller closed this 7 months ago
/cc @nbelakovski
Other than the comment about using the enum type (which occurs in 3 or 4 other places besides the one I mentioned), it looks good to me.
LGTM, I'll push on @zaikunzhang to get this merged and then rebase the callback PR and my python bindings branch
TBH, I personally prefer the version before this PR. The reason is that some users (like myself) do not need all five solvers, and can then selectively compile and include some, but not all, source files in a downstream project. Using a single entry point `prima_minimize` implies I have to compile everything even if I only use one solver.
@TLCFEM A couple of questions: You could call `cobyla_c` (or whatever algorithm you're interested in) directly without going through `prima_minimize`. You'd lose the convenience of the API with `prima_problem_t` etc., but if you're only interested in one algorithm then maybe that adds a layer of indirection you don't want anyway? I'm also curious how you're solving this problem at the moment. Do you have some custom build scripts you use on top of prima to build only the algorithm you want?
It may not be a problem for some, but it could be for others. Maybe they simply do not want to include dead code (dead in the sense that it will not be used in their particular use case). Keeping the solvers independent allows linkers to omit unused code, reducing binary size, etc.
Similar logic can often be seen elsewhere. I also do not use boost for the same reason.
I would say adding a unified API while keeping the independent APIs would be good. That allows flexibility: people who care about binary size and really do not want the unified layer can simply remove the corresponding part.
Currently I am not using it in production. I see the implementation is mostly sequential. I presume it can be easily parallelised with either OpenMP or a third-party LAPACK implementation, which is what I'd like to discuss with you here. I do plan to employ some solvers in large-scale problems (thousands to millions of DoFs). But at the moment, I am mostly trying small problems and checking what the performance would be with parallelised linear algebra operations.
You make a good point about boost, but this library won't grow beyond these 5 solvers (in fact it doesn't and won't include COBYQA, an improvement over COBYLA, because the focus of this library is on Powell's solvers).
Also, unlike boost, we have very few external dependencies (in many cases none).
I could see an argument for structuring the code so that all of the common code goes into a common library, with separate libraries on top of it for the various algorithms, and then a "master" library on top of those that provides all of them. But in the interest of avoiding premature optimization, I would prefer to defer that work until it actually becomes a problem.
For what it's worth, in the fortran examples we have makefiles that compile only the necessary algorithm, so at least there is a reference for compiling just the algorithm you need.
Yeah, I am not very concerned, as I can just extract part of the library and modify the C source files accordingly. And one can always directly call the `_c` Fortran version. Don't get me wrong, a unified API is for sure good. What I'd like to say is that the independent APIs offer extra flexibility, so maybe they can coexist with the unified one.
Circling back to this: @zaikunzhang and I had a discussion about @TLCFEM's concerns before merging; @zaikunzhang was going to add a comment about this, but I guess it slipped through. The main reason for not exposing the individual algorithms is that it makes it very tedious to do common things like scaling the input. `pdfo`, the library exposing the F77 versions of these algorithms to Python, exposes the individual algorithms as well as a unified one, but in @zaikunzhang's experience this has proved to be a mistake.
We should only provide one single function (e.g. `prima`) to the users. Exposing both `prima` and the individual solvers would substantially complicate the logic of the code in many ways. This is a lesson we learned from PDFO.
This simplifies the API even further, with an enum to choose the algorithm instead of 5 different functions, one per algorithm: