Pyomo / mpi-sppy

MPI-based Stochastic Programming in PYthon
https://mpi-sppy.readthedocs.io

RuntimeError: Found infeasible solution which was feasible before #255

Closed: brentertainer closed this issue 2 years ago

brentertainer commented 2 years ago

I am encountering the runtime error from the issue title as the progressive hedging algorithm attempts to terminate. The file below contains the text I see written to the terminal.

winterization.stdout.txt (https://github.com/Pyomo/mpi-sppy/files/8880483/winterization.stdout.txt)

The particular section of code that raises the error is here: https://github.com/Pyomo/mpi-sppy/blob/5c6b4b8cd26af517ff09706d11751f2fb05b1b5f/mpisppy/cylinders/spoke.py#L355-L359

In those lines of code, I see the comment saying "this should be feasible here; if not we've done something wrong". This check was added a little more than a year ago. Have the developers since encountered this issue? Is it possible that the settings I specify are causing this issue?

DLWoodruff commented 2 years ago

Yes, we have seen this. It usually occurs during finalization while trying to compute the objective function for xhat. It may have something to do with the solver tolerance settings for the xhat spoke versus the hub, but it could be something else. Are you using vanilla and baseparsers? If so, can you send me your command line (to @.***). If not, we might have to zoom. Dave
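
To illustrate the kind of mismatch I mean, here is a toy sketch in plain Pyomo (this is not mpi-sppy code, and the option name is Gurobi's): a point accepted under a loose feasibility tolerance can be rejected when the same value is fixed and re-checked under a tighter tolerance, which is roughly what the xhat spoke does when it re-solves with the hub's nonant values.

```python
# Toy illustration only -- not mpi-sppy internals.
import pyomo.environ as pyo

def recheck(x_candidate, feastol):
    # Fix "x" to a candidate value (playing the role of a nonant handed to the
    # xhat spoke) and ask the solver whether the model is still feasible under
    # the given feasibility tolerance.
    m = pyo.ConcreteModel()
    m.x = pyo.Var()
    m.y = pyo.Var(bounds=(0, 1))
    m.cap = pyo.Constraint(expr=m.x + m.y <= 1.0)
    m.obj = pyo.Objective(expr=m.y, sense=pyo.maximize)
    m.x.fix(x_candidate)
    opt = pyo.SolverFactory("gurobi")   # Gurobi option name below; other solvers differ
    opt.options["FeasibilityTol"] = feastol
    res = opt.solve(m, load_solutions=False)
    return res.solver.termination_condition

x_hat = 1.0 + 5e-3                      # violates the cap constraint by 5e-3
print(recheck(x_hat, 1e-2))             # loose tolerance: typically reported optimal
print(recheck(x_hat, 1e-9))             # tight tolerance: typically reported infeasible
```

If that is what's happening here, giving the hub's subproblem solves and the xhat spoke the same tolerance settings should make the error go away; but again, it could be something else.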

bknueven commented 2 years ago

Since you're using Egret for your problem formulation, it would also be useful if you could provide some information about that.

In particular, I see these lines:

Scen2
Calculating PTDF Matrix Factorization

which suggest you're also using Egret's lazy-PTDF capability, which gets complicated within a stochastic optimization problem.

As an aside, you might also upgrade to the latest version of Egret, 0.5.2: https://pypi.org/project/gridx-egret/

brentertainer commented 2 years ago

First, let me just mention that the log I shared with you is from my Windows system. I experience the same issue on a RHEL7 system with OpenMPI. To avoid confusion, I will only use logs from the RHEL system going forward.

It's also worth mentioning that I started with serial versions of EF and PH; that code is in the attached zip file. For all three (serial EF, serial PH, and parallel PH), I observe the solver converging to the same objective value (~31.7153).

@DLWoodruff : Yes, I am using vanilla and baseparsers. I have attached a zip of the files from my project that you need to reproduce the issue. The file run.sh is what I use to invoke the Python script with a few baseparser command line options. Inside main_ph_par.py, you will see I am also trying to use the NormRhoConverger and NormRhoUpdater.

@bknueven : On the RHEL system, I installed Egret from the main branch of the project. This is evidenced in the attached log. Otherwise, everything you mentioned is accurate. I am unsure what you mean by "complicated" here -- are you alluding to the burden of having to calculate the PTDF matrix for each (scenario, spoke) pair? If not, please elaborate a bit as I am curious to know how you mean it.

Project Files: three-stage-winterization.zip

Log from RHEL Machine: winterization.log

brentertainer commented 2 years ago

@bknueven The PTDF model is new to me and I did not realize I was using a "lazy" implementation of the model until I started looking into parts of your comment that I was not grasping. I understand the issues with my implementation more clearly now.

I am trying to solve a three-stage model. The first two stages make long-term and short-term grid hardening decisions in the face of climate- and weather-related uncertainty, respectively. The final stage solves a DCOPF problem in which the uncertain parameters are generator capacities and bus loads. We apply the CVaR risk metric over the first-stage uncertainty and the expected value over the second-stage uncertainty. Linearizing the CVaR introduces a constraint on the expected value of the third-stage loss. So that we can use progressive hedging, we have implemented our model in mpi-sppy as a two-stage model; the second-stage problem involves solving multiple DCOPF instances, each with the same load profile but different generator capacities.
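
Concretely, by "linearizing the CVaR" I mean the standard Rockafellar-Uryasev form,

$$\mathrm{CVaR}_\alpha(L) \;=\; \min_{\eta}\ \eta + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(L-\eta)_+\big],$$

which, with scenario probabilities p_s and third-stage losses L_s, becomes

$$u_s \ge L_s - \eta,\qquad u_s \ge 0,\qquad \mathrm{CVaR} = \eta + \tfrac{1}{1-\alpha}\sum_s p_s\, u_s.$$

It is the sum over p_s u_s that puts an expectation over the third-stage loss into the earlier-stage constraints.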

My understanding of the issues using EGRET's lazy PTDF with mpi-sppy is as follows. First, we are probably not doing anything to iteratively activate the violated lazy constraints in each DCOPF instance. (I say probably because I am not 100% sure how those constraints would be added even in a deterministic model.) Second, even if we are activating violated constraints, we are only adding them to the OPF instance in which they are violated and not to all OPF instances. It is not clear to me why or even if we need to do this, but I see that you are doing it in the diff you wrote for #134. That development was completed some time ago but never merged. Is a code review the only thing keeping that issue from being closed?
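
For concreteness, here is the generic activate-on-violation loop that I understand "lazy" constraints to mean, written against a made-up toy Pyomo model (this is not Egret's actual machinery, just the pattern):

```python
# Generic lazy-constraint pattern: build candidate constraints deactivated,
# solve, activate whichever ones the current solution violates, repeat.
import pyomo.environ as pyo

def build_toy_model():
    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(0, 10))
    m.y = pyo.Var(bounds=(0, 10))
    m.obj = pyo.Objective(expr=m.x + m.y, sense=pyo.maximize)
    # stand-ins for the (many) flow-limit constraints, all deactivated to start
    m.lazy = pyo.ConstraintList()
    m.lazy.add(m.x + m.y <= 12)
    m.lazy.add(m.x - m.y <= 5)
    for con in m.lazy.values():
        con.deactivate()
    return m

def is_violated(con, tol=1e-6):
    # evaluate the constraint body at the current solution and compare to its bounds
    body = pyo.value(con.body)
    if con.has_ub() and body > pyo.value(con.upper) + tol:
        return True
    if con.has_lb() and body < pyo.value(con.lower) - tol:
        return True
    return False

m = build_toy_model()
opt = pyo.SolverFactory("glpk")   # any LP solver
while True:
    opt.solve(m)
    newly_violated = [c for c in m.lazy.values() if not c.active and is_violated(c)]
    if not newly_violated:
        break
    for con in newly_violated:
        con.activate()
```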

bknueven commented 2 years ago

> @bknueven The PTDF model is new to me and I did not realize I was using a "lazy" implementation of the model until I started looking into parts of your comment that I was not grasping. I understand the issues with my implementation more clearly now.

The most straightforward thing to do would be to use Egret's B-theta DCOPF implementation. The code in #134 is customized for the unit commitment example in mpi-sppy and would need to be modified for your problem, but does provide a sketch of how you would do it.
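
Roughly, the scenario construction would switch to something like this (I'm writing the Egret module and function names from memory, so double-check them against the version you have installed):

```python
# Sketch: build a B-theta DCOPF Pyomo model with Egret instead of the lazy-PTDF
# formulation.  Module/function names here are from memory and may differ by
# Egret version.
import pyomo.environ as pyo
from egret.parsers.matpower_parser import create_ModelData
from egret.models.dcopf import create_btheta_dcopf_model

md = create_ModelData("case30.m")                # any MATPOWER case file
model, scaled_md = create_btheta_dcopf_model(md) # returns the Pyomo model and the
                                                 # (scaled) ModelData, if I recall
                                                 # correctly
opt = pyo.SolverFactory("gurobi")                # or whatever LP solver you're using
opt.solve(model)
```

Your mpi-sppy scenario creator would then attach the hardening decisions and nonanticipative variables to this model the same way you do now.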

Overall, unless you see a huge performance difference on the "deterministic equivalent" using PTDFs, I would stick to B-theta.