Closed WalterMadelim closed 11 months ago
Please see the attached PNG file.
I'm using Julia 1.9.3.
I create a blank env and use `Pkg.add` with https://github.....NCNBD.jl to install your package, but it fails to precompile.
It looks like this package doesn't specify a compat bound for SDDP.jl, so it's letting you install incompatible versions of NCNBD.jl and SDDP.jl:
SDDP.jl introduced breaking changes in v0.4.0 in August 2021:
To fix this, you probably need to install an older version of SDDP.jl: `import Pkg; Pkg.pkg"add SDDP@0.3"`
(If you're okay with a suboptimal policy, SDDP.jl can also solve problems with integer variables.)
OK, thanks for your solution! But after adding SDDP@0.3, I found that the package is again not compatible with my Gurobi 10 (it only supports Gurobi 9). It seems a bit troublesome to maintain a suitable environment. So maybe I shall not get NCNBD installed, but just learn from the code.
It seems a bit troublesome to maintain a suitable environment
Yeah the "correct" fix for this is for a package to declare which versions it supports: https://github.com/odow/SDDP.jl/blob/fd2b66f5c3108e50c5440aa47b7da09c365cb6e3/Project.toml#L21-L31
It's hard to go back a few years later and pin down which set of versions was required.
(It's made harder because two years ago Gurobi 10 was not released.)
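Concretely, such bounds live in a `[compat]` section of the package's Project.toml. A sketch with illustrative version numbers (these are hypothetical, not NCNBD.jl's actual requirements, since those were never declared):

```toml
[compat]
# Hypothetical bounds -- the real ones would have to be reconstructed.
SDDP = "0.3"      # any 0.3.x release; excludes the breaking 0.4 series
Gurobi = "0.9"    # wrapper versions compatible with Gurobi 9
julia = "1.5"
```

With bounds like these in place, Pkg's resolver refuses to install an incompatible SDDP.jl version instead of failing later at precompilation.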
So, maybe I shall not get NCNBD installed but just learn from the code.
You might also be interested in
Hi @WalterMadelim, sorry for the delayed response.
I'm sorry that you had issues with installing the package. It's been some time since I have implemented it, and as I did not plan to register it, I haven't really taken care of it in a while - so it's most likely deprecated. As @odow rightly mentioned, I also had not specified which package versions it is compatible with (it was my first project in Julia, so I had not figured that out yet, apparently). I have to see if or when I find the time to update the package to be accessible by others.
If you look at the code and have any questions, just let me know.
In addition to the projects that @odow mentioned, I outsourced some of the NCNBD.jl work into a new project a while ago: https://github.com/ChrisFuelOR/DynamicSDDiP.jl. This does not include the outer loop that linearizes the occurring non-linear functions, but it includes the idea of the inner loop from NCNBD.jl and many more ideas that I tried out for SDDiP (it also shares some similarities with what my student Leo Schleier works on in his Python package). I'm still developing this project, so it might not be in shape for someone else to try out yet, but you may have a look if you want.
@ChrisFuelOR Hi. I've been reading your NCNBD paper for some time. It seems that you intended to consider nonlinearity in only 1 or 2 dimensions. I'm not yet clear on how to make an adaptive refinement with triangulation and longest-edge bisection.
Briefly speaking, it's a bit difficult for me to get a clear picture of how the piecewise linear relaxation is refined in your outer loop. You refer the reader to [9] (solving MINLPs using adaptively refined MILPs) in your paper, and [9] in turn defers further details to another reference, [11] (B. Geißler, PhD thesis, FAU 2011). Here is my request: I have no access to reference [11] (https://www.researchgate.net/publication/252932944_Towards_Globally_Optimal_Solutions_for_MINLPs_by_Discretization_Techniques_with_Applications_in_Gas_Network_Optimization), and I tried to contact the author by e-mail about a month ago, but got no response. So would you be willing to share this reference with me? Or could you describe the PWL process (Section 3.4.2 in your NCNBD paper) in a more intuitive way? I've been reading your NCNBD paper with patience and have some observations, such as: should lines 9 and 10 in Algorithm 1 be swapped? Am I right? I'd be glad if you are willing to help me!
@WalterMadelim Thanks for your interest in my paper and the implementation. I'm kind of busy right now (and to answer some of your questions I have to delve into my implementation again), but I'll try to get back to you in more detail in the course of next week.
To answer your first question: Yes, I have only considered non-linearities in 1 or 2 dimensions. The main reason is that defining and refining a triangulation in a higher-dimensional space becomes much more intricate (and I also do not recall if it is even supported by the packages that I used). Therefore, the idea is to split higher-dimensional non-linearities into lower-dimensional ones using an expression graph. This is a common technique in global optimization (where it is usually used to convexify non-linear functions in lower-dimensional spaces for which convex envelopes are known). On the other hand, it complicates the model specification for the user, since I have not automated this step in my implementation.
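As a sketch of what such a splitting looks like (a made-up trilinear example, not taken from the package; the variable names and bounds are my own illustration), an auxiliary variable from the expression graph turns one 3-dimensional nonlinearity into two 2-dimensional ones that can each be triangulated:

```julia
using JuMP

model = Model()
@variable(model, 1 <= x <= 2)
@variable(model, 1 <= y <= 2)
@variable(model, 1 <= z <= 2)

# f(x, y, z) = x * y * z is 3-dimensional; introduce w = x * y so that
# each remaining nonlinearity involves only two variables:
@variable(model, 1 <= w <= 4)   # bounds follow from the bounds of x and y
@constraint(model, w == x * y)  # 2D nonlinearity in (x, y)

@variable(model, f)
@constraint(model, f == w * z)  # 2D nonlinearity in (w, z)
```

Each of the two quadratic equations can then be relaxed by a triangulation over its own 2D box, at the price of the extra variable `w` that the user has to model explicitly.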
With respect to the references (and also further questions on the paper), you can also contact me via e-mail. The e-mail address is in the paper.
Thanks for replying. It's evident that you've involved lots of techniques in your NCNBD paper, and that's one of the reasons I'm studying it currently. I'm not hurry and will keep on my research in the field of (Multi-stage) stochastic programming. I'm also available through email (I've sent you one to leave my address waltermadelim@foxmail.com).
Some details noticeable in Section I (Unit commitment problem formulation and results) (page 1019)
Data:
- $p_e$: tax on emission. But you use $p^e_g$ in (29).
- $\bar r_g$: it seems you duplicated this line to produce the line below it, forgetting to change 'up' to 'down'. Furthermore, it might be more appropriate to call it a "ramp-up limit" rather than a "ramp-up rate"?
@WalterMadelim
Thanks again for your questions. I admit that (due to the page limit) we had to significantly shorten the passage on the MILP relaxations in the paper, so some of the steps are indeed not that obvious. Same for the code, which, apart from not being maintained, is also not very easy to access.
In general, our MILP techniques mostly follow ideas from the references by Burlacu et al. [BUR] and Geißler et al. [GEI] (see our paper for the exact references).
Our whole PWL process consists of 4 main ingredients:
Some important things to address with respect to your questions:
As mentioned in my previous post, the code only supports 1D and 2D nonlinear functions. Higher-dimensional functions have to be modeled using an expression graph (note that expression graphs are explained in [GEI, Sect. 4], for instance). I had no time to automate this part in my code, so I had to hard-code this kind of reformulation in my test problems (that’s why there are a lot of auxiliary variables in the UC model). The information for each nonlinear function is stored in an object of type NonlinearFunction which is used throughout the algorithm.
In the paper and in the code, we use the refinement strategy proposed in [BUR, Sect. 2.2], which refines a simplex in the current triangulation by bisecting a longest edge. This refinement strategy and the related convergence results do not require Delaunay triangulations. Therefore, it is also not important whether the Delaunay property is retained by the refinements. This may be a bit confusing because I use the Delaunay.jl package in my implementation. However, the main reason for this is that before starting the algorithm, you have to come up with an initial triangulation for each nonlinear function. Delaunay.jl seemed to be a good fit to achieve that. You can simply create a uniform grid for the bounded domain of each nonlinear function and then use the function Delaunay.delaunay() to obtain the simplices for an initial triangulation. That’s the only step this package is used for in my code. It plays no role in the later refinements.
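A sketch of that initialization step (my own illustration, not code from NCNBD.jl; I am assuming `delaunay()` accepts a points-by-rows matrix and returns a mesh with a `simplices` field of vertex indices, so treat the exact API as an assumption):

```julia
using Delaunay  # assumed API: delaunay(points) returns a mesh with .simplices

# Uniform 3x3 grid on the box [0,1] x [0,1] for a 2D nonlinear function.
points = reduce(vcat, [[x y] for x in 0.0:0.5:1.0 for y in 0.0:0.5:1.0])

mesh = delaunay(points)
# mesh.simplices: one row of three vertex indices per triangle. This is the
# initial triangulation; all later refinements use only longest-edge
# bisection and never call Delaunay.jl again.
```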
In the paper, we mentioned the generalized incremental model as a way to translate the triangulations / piecewise linear approximations to MILP constraints. This model is also used in [BUR, Sect. 2.1]. As your question suggests, in this case, you need to identify a Hamiltonian path. I found this part a bit tricky to implement, so I decided to use a different MILP model in my code. The MILP model that I used is the (generalized) disaggregated logarithmic convex combination model, see [GEI, Sect. 3.1] for instance. For this model, an implementation exists in PiecewiseLinearOpt.jl that I just had to adapt a bit to my specific setting. As is mentioned in [BUR, page 11], all the convergence results with respect to the longest-edge bisection rule still hold when using this alternative MILP model, so we do not lose any important properties by doing so.
Importantly, in our algorithmic setting we have to make sure that the nonlinear functions are approximated in such a way that we obtain not only an approximation, but a relaxation (outer approximation). Therefore, it is required to shift the approximations sufficiently. The hard part is that the required shift has to be known (or at least overestimated) for this, which is a non-convex global optimization problem in itself, see [GEI, Sect. 4]. The good thing is that this complex computation can be avoided sometimes. For instance, if the function is concave, we know that no shifting is required at all.
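In symbols (my paraphrase of the shifting idea for the case where the nonlinear function has to be underestimated; the notation is mine, not the paper's):

```latex
% \hat f: PWL interpolation of f on a triangulation of the domain D.
% Shifting it down by \delta yields a valid underestimator \check f:
\check f(x) = \hat f(x) - \delta,
\qquad
\delta \;\ge\; \max_{x \in D} \bigl( \hat f(x) - f(x) \bigr),
% which guarantees \check f(x) \le f(x) for all x \in D.
% Computing (or overestimating) this \delta is itself a non-convex
% global optimization problem. If f is concave, the interpolation
% already satisfies \hat f(x) \le f(x) on every simplex, so \delta = 0.
```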
To the refinement process in particular:
In the outer loop, by solving the master problem, we have computed a trial point. Now, if the loop does not terminate, the aim is to refine the piecewise linear relaxations, as they are not fine enough.
We make refinements for each nonlinear function in the model in each iteration (even though this may also be adapted). Hence, the following steps are repeated for each nonlinear function.
First, we identify the simplex in the current triangulation which contains the current trial point. The reason behind this is that we do not want to refine the triangulation / piecewise linear approximation everywhere, including suboptimal regions, but only around the current trial point.
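A minimal sketch of that containment test (my own illustration, not code from NCNBD.jl), using barycentric coordinates:

```julia
using LinearAlgebra

# Does point x lie in the simplex? The simplex is given as a vector of
# d+1 vertex coordinate vectors in d dimensions.
function contains_point(simplex::Vector{Vector{Float64}},
                        x::Vector{Float64}; tol = 1e-9)
    # Solve V * λ = x together with sum(λ) = 1 for the barycentric
    # coordinates λ; x is inside the simplex iff all λ are nonnegative.
    A = vcat(reduce(hcat, simplex), ones(1, length(simplex)))
    b = vcat(x, 1.0)
    λ = A \ b
    return all(λ .>= -tol)
end

# The trial point (0.25, 0.25) lies in the unit triangle:
contains_point([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], [0.25, 0.25])  # true
```

Looping this test over the simplices of the current triangulation picks out the one to refine.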
Second, we divide a longest edge of this selected simplex at its midpoint in order to define two new simplices. With these two new simplices, the triangulation is updated. All the other simplices in the triangulation remain unchanged.
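The bisection step itself can be sketched in a few lines (again my own illustration, not the package's code; a simplex is stored as a vector of vertex coordinate vectors):

```julia
# Split a simplex at the midpoint of a longest edge, returning the two
# child simplices that replace it in the triangulation.
function bisect_longest_edge(simplex::Vector{Vector{Float64}})
    n = length(simplex)
    besti, bestj, bestd = 1, 2, -Inf
    for i in 1:n-1, j in i+1:n
        d = sum(abs2, simplex[i] .- simplex[j])  # squared edge length
        if d > bestd
            besti, bestj, bestd = i, j, d
        end
    end
    midpoint = (simplex[besti] .+ simplex[bestj]) ./ 2
    s1 = copy(simplex); s1[bestj] = midpoint  # child 1 keeps vertex i
    s2 = copy(simplex); s2[besti] = midpoint  # child 2 keeps vertex j
    return s1, s2
end

# Splitting the unit triangle along its longest edge (the hypotenuse):
s1, s2 = bisect_longest_edge([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# both children share the new vertex [0.5, 0.5]
```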
For the updated triangulation, an updated MILP model is formulated.
Again, a sufficient shift is determined such that the MILP model leads to a relaxation.
An important thing to note here is that, even if you improve the piecewise linear approximations (by splitting up one simplex), the piecewise linear relaxations may not improve. In fact, they may become worse. This is due to the shifting step (see above). We have to take care of this to prevent previously generated cuts from becoming invalid. To ensure that the relaxations improve with each refinement, we may have to keep earlier approximations as well.
I hope this clarifies some of your questions.
Some details noticeable in Section I (Unit commitment problem formulation and results) (page 1019)
Data:
- $p_e$: tax on emission. But you use $p^e_g$ in (29).
- $\bar r_g$: it seems you duplicated this line to produce the line below it, forgetting to change 'up' to 'down'. Furthermore, it might be more appropriate to call it a "ramp-up limit" rather than a "ramp-up rate"?
You are right. In the model description there are several typos. I noticed this myself as well when I re-read the paper some time ago.
It is also correct that lines 9 and 10 in Algorithm 1 have to be swapped. The upper bound is, of course, updated once the whole for-loop has terminated.
I hope this clarifies some of your questions.
Yes, your answer is comprehensive, the background is solid, and the research is meticulous. It's a pleasure to read your paper. I studied stochastic programming by myself for the first half of this year and started to read many recent papers, among which I picked your NCNBD as my main branch of study. This decision has proved judicious, since it has really broadened my horizons.
After grasping your main idea in July, I decided to learn the Julia language, JuMP, GitHub usage, etc. I'm currently learning the modeling framework and solution techniques supported by https://github.com/odow/SDDP.jl/issues/676#issuecomment-1732473445. I think it won't be long before I come back with a deeper understanding of your paper and illustrations. I hope I'll be able to implement the NCNBD algorithm with the knowledge I've acquired these days.
The way you do mathematical research makes a deep impression on me, and it strengthens the (nice) common belief in my mind that Germans are rigorous 🙂. By the way, I'm not German (nor Western): "Walter" is from Rudin, whom I learned math from, and "Madelim" is a name I chose myself. English is not my native language, so if any expressions are not appropriate, please excuse them.
Sincere thanks for the answers! I'm looking forward to learning about any newly released work that you've participated in. Best wishes to you, Mr. Füllner @ChrisFuelOR :)
Hi, Mr. Füllner @ChrisFuelOR. I'm now much clearer about the PWL process in your paper, and I've tried the unit commitment problem (Eq. (28)-(39)) from your paper in my repo UnitComm, where the data for the generators is from your example UC_24_3.jl. I first solve it directly as a large MINLP using SCIP (you mentioned SCIP doesn't support sinusoidal functions, but I find that it currently does), and it gives an optimal solution in about 1 min.
Then I'd like to try the decomposition method proposed in your paper. Now I have a question: the nonlinearities in the model seem to involve only x[g,t] and y[g,t]. Thus I guess there is no need to perform the more complicated triangulation approximation in 2-dim space. So did you provide an instance somewhere in your paper or in this GitHub repo that is suitable for testing the triangulation method for 2-dim nonlinearity? I've read through the reference [9, BUR2020], especially Sections 4 and 5, which model a gas network with compressors. I think one of the most crucial parts is the characteristic diagram of the turbo compressor in Figure 3. However, I was frustrated when I read this:
For the characteristic diagrams, we use the convex outer-approximation approach described in [22] in case of a turbo compressor.
Because [22] seems to be a book published by SIAM, which is not directly available to me, and I don't know what exactly this convex outer-approximation entails. After all, I'm just looking for an instance where I can test the 2-dim triangulation method.
In comparison, I think this is a nice point of your work: you give the corresponding data source and even the code, which brings a lot of convenience for readers.