Robbybp / surrogate-vs-implicit

Comparing surrogate models and implicit function formulations for chemical process models

[WIP] Start trying to explain convergence failures #22

Open Robbybp opened 7 months ago

Robbybp commented 7 months ago

I initially intended to compute condition numbers of the Jacobian and KKT matrix, but I got distracted by computing errors in surrogate models. With this branch, we can produce a new "surrogate error" file with:

python run_alamo_sweep.py --compute-surrogate-error
python run_nn_sweep.py --compute-surrogate-error

This option computes the maximum relative error between a simulation of the surrogate model and a simulation of the full-space reactor model at the point where the optimization problem stops (whether or not it converged successfully). Using this option, I can get the following plots:

The transparent blocks in these grids (e.g. outside the training region with the NN) correspond to cases where either the full-space reactor model or the reactor surrogate fails to simulate. These would be worth investigating further.
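
For reference, the error metric could be computed along the lines of the sketch below; the function and key names here are hypothetical, not the actual API in this branch.

def max_relative_error(surrogate_outputs, fullspace_outputs):
    """Maximum relative error between two dicts of simulation outputs (names hypothetical)."""
    return max(
        abs(surrogate_outputs[key] - fullspace_outputs[key])
        / max(abs(fullspace_outputs[key]), 1e-8)  # guard against division by zero
        for key in fullspace_outputs
    )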

sbugosen commented 7 months ago

@Robbybp I tested 4 successful instances for every formulation. Plot shown below. I'll push the script soon. The neural network is showing behavior similar to that of the full-space formulation, which is kind of unfortunate and unexpected. I'm trying to find an explanation for this. The NN formulation has the largest number of variables and constraints; could this be a starting point for an explanation?

Maybe the NN has a high condition number because the function is not as smooth as the ALAMO polynomials, there is a higher chance of numerical instability due to the number of parameters, and an overtrained NN gives divergent values for small variations in the input?

[Plot: condition numbers for 4 successful instances, all formulations]

sbugosen commented 7 months ago

The divergence for Subset 1 occurs at iteration 13 of the full-space formulation solve. The highest residual occurs for the bypass_rejoin enthalpy mixing equations. I would still like to do the more sophisticated analysis: make the Jacobian a square matrix, find its block triangular form, and calculate the condition number of each resulting block to identify the culprit. I'll try to do this.
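
For reference, one simple way to rank the active equality constraints by residual at the point where the solver stopped is a sketch like the following, using plain Pyomo (m is the stopped model):

from pyomo.environ import Constraint, value

residuals = []
for con in m.component_data_objects(Constraint, active=True):
    if con.equality:
        # Residual of an equality constraint: body minus right-hand side
        residuals.append((abs(value(con.body) - value(con.upper)), con.name))
for resid, name in sorted(residuals, reverse=True)[:10]:
    print(f"{resid:.3e}  {name}")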

Robbybp commented 7 months ago

Really nice results, thanks. I'm impressed that the ALAMO and implicit function condition numbers track well.

The neural network is showing a similar behavior to that of the full space formulation, which is kind of unfortunate and unexpected.

I don't think this is unfortunate or unexpected. It's just interesting that the NN formulation sometimes tracks the full-space formulation and sometimes doesn't.

Can you plot this for a case where full-space, NN, and implicit fail?

sbugosen commented 7 months ago

Can you plot this for a case where full-space, NN, and implicit fail?

These are the plots for 4 unsuccessful instances. The implicit function formulation failed due to evaluation errors, which is why its curve stops early. Notice the magnitude of the condition number now. Tomorrow I'll get started on block triangularization of the Jacobian.

[Plot: condition numbers for 4 unsuccessful instances]

Robbybp commented 7 months ago

Some nicer plots for surrogate error:

Robbybp commented 7 months ago

These are the plots for 4 unsuccessful instances. The implicit function formulation failed due to evaluation errors, which is why its curve stops early. Notice the magnitude of the condition number now.

Thanks. From these plots, it seems like we jump into a "bad region" of high condition number, then never recover. I wonder if there is some difference between the jump that occurs around iteration 15 for the failing vs. successful cases.

sbugosen commented 7 months ago

Hi Robert, I'm running this code:

# make_optimization_model and config come from this repository's scripts;
# the Pyomo import paths below are assumed.
from pyomo.contrib.pynumero.interfaces.pyomo_nlp import PyomoNLP
from pyomo.contrib.incidence_analysis import triangularize

P = 1947379.0
X = 0.94
m = make_optimization_model(X, P, initialize=True)
solver = config.get_optimization_solver(iters=50)
results = solver.solve(m, tee=True)
m.fs.REF_OUT_expanded.pressure_equality[0].deactivate()
nlp = PyomoNLP(m)
jac = nlp.evaluate_jacobian()
row_partition, col_partition = triangularize.block_triangularize(jac)

The output for col_partition (row_partition is similar) is:

[18, 20, 9, 22, 23, 24, 25, 26, 15, 28, 29, 32, 46, 44, 48, 49, 50, 51, 52, 53, 54, 55, 73, 75, 100, 77, 78, 79, 80, 81, 82, 83, 84, 124, 140, 129, 142, 143, 144, 145, 146, 138, 148, 149, 150, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 367, 381, 346, 383, 379, 385, 386, 387, 352, 389, 390, 411, 831, 830, 832, 404, 405, 406, 407, 409, 424, 429, 431, 414, 435, 421, 400, 401, 402, 403, 415, 416, 417, 418, 420, 466, 467, 444, 469, 470, 471, 472, 473, 450, 475, 476, 6, 7, 8, 529, 10, 11, 12, 13, 14, 535, 16, 17, 539, 550, 19, 540, 21, 542, 543, 544, 545, 546, 27, 548, 549, 565, 576, 45, 566, 47, 568, 569, 570, 571, 572, 573, 574, 575, 122, 98, 99, 584, 101, 102, 103, 104, 105, 106, 107, 108, 593, 604, 74, 594, 76, 596, 597, 598, 599, 600, 601, 602, 603, 125, 126, 127, 128, 627, 130, 131, 132, 133, 134, 135, 136, 137, 637, 648, 139, 638, 141, 640, 641, 642, 643, 644, 147, 646, 647, 650, 661, 151, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 343, 344, 345, 792, 347, 348, 349, 350, 351, 798, 353, 354, 817, 828, 380, 818, 382, 820, 384, 822, 823, 824, 388, 826, 827, 426, 427, 428, 835, 430, 423, 432, 433, 434, 840, 436, 437, 844, 438, 843, 412, 413, 846, 425, 847, 848, 849, 419, 851, 852, 441, 442, 443, 861, 445, 446, 447, 448, 449, 867, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 886, 884, 885, 468, 887, 888, 889, 890, 891, 893, 894, 581, 582, 583, 567, 585, 586, 587, 588, 589, 590, 591, 592, 624, 625, 626, 595, 628, 629, 630, 631, 632, 633, 634, 635, 789, 790, 791, 639, 793, 794, 795, 796, 797, 645, 799, 800, 422, 833, 834, 819, 836, 821, 837, 838, 839, 825, 841, 842, 526, 527, 528, 845, 530, 531, 532, 533, 534, 850, 536, 537, 858, 859, 860, 541, 862, 863, 864, 865, 866, 547, 868, 869, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882], [485], [513], [487], [515], [488], [516], [489], [517], [490], [518], [491], [519], [410], [494], [522], [493], [521], [408], [61, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 606, 62, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616], [87], [86], [88], ...

How can I reconstruct the block triangular form of the Jacobian matrix with this information?

sbugosen commented 7 months ago

An example plot of the condition number heat map. It mostly correlates with the unsuccessful instances.

[Image: condition number heat map]
Robbybp commented 7 months ago

Typically, I would get the condition number of diagonal blocks with the following:

import numpy as np
from pyomo.contrib.pynumero.interfaces.pyomo_nlp import PyomoNLP
from pyomo.contrib.incidence_analysis import IncidenceGraphInterface

results = solver.solve(m, tee=True)
# FIX DEGREES OF FREEDOM HERE!
nlp = PyomoNLP(m)
igraph = IncidenceGraphInterface(m, include_inequality=False)
vblocks, cblocks = igraph.block_triangularize()
submatrices = [
    nlp.extract_submatrix_jacobian(vb, cb) for vb, cb in zip(vblocks, cblocks)
]
for i, submat in enumerate(submatrices):
    cond = np.linalg.cond(submat.toarray())
    # Print or store condition number as necessary

Hope this helps.

In your particular example, I'm a little confused about why block_triangularize is working, when you seemingly haven't fixed any degrees of freedom. I assume you just left this out of the example? I also don't know why you would have to deactivate any equations.
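
For completeness, the partitions you already have can be used to permute the Jacobian into block triangular form directly; a minimal sketch, assuming jac, row_partition, and col_partition from your snippet:

import numpy as np

row_perm = [i for block in row_partition for i in block]
col_perm = [j for block in col_partition for j in block]
# Permute rows and columns so the blocks appear in order along the diagonal
jac_bt = jac.toarray()[np.ix_(row_perm, col_perm)]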

Robbybp commented 7 months ago

An example plot of the condition number heat map. It mostly correlates with the unsuccessful instances.

Thanks! Is there a way to make the color bar scale logarithmic?
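
Something along these lines should work with matplotlib, if that helps (a sketch with made-up data, not your plotting script):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

data = np.random.rand(10, 10) * 1e12  # placeholder for the condition number grid
plt.imshow(data, norm=LogNorm(vmin=1e0, vmax=1e12))
plt.colorbar()
plt.show()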

sbugosen commented 7 months ago

@Robbybp

Can you run the following commands please?

python cond_run_fullspace_sweep.py

python plot_condition_num_grid.py \
    data/fullspace-sweep.csv \
    --title="Full-space"

sbugosen commented 7 months ago

In your particular example, I'm a little confused about why block_triangularize is working, when you seemingly haven't fixed any degrees of freedom. I assume you just left this out of the example? I also don't know why you would have to deactivate any equations.

No, the code worked. I stopped the full-space solve at iteration 50, and had to deactivate the pressure equality to have 898 variables and 898 constraints. I did not have to fix any degrees of freedom after stopping the solve. Thank you!! I'll try that code.

Robbybp commented 7 months ago

898 variables and 898 constraints

Shouldn't we have 895 constraints and variables after fixing degrees of freedom in the full-space model?

sbugosen commented 7 months ago

898 variables and 898 constraints

Shouldn't we have 895 constraints and variables after fixing degrees of freedom in the full-space model?

I see what happened. No, I did not fix any degrees of freedom; that's why I have 898 variables in that Jacobian. I had 899 constraints (895 equalities, 4 inequalities), and I mistakenly deactivated the pressure equality to make them match. I forgot that the Jacobian build only needed the 895 equality constraints, which in turn would need only 895 variables, with our 3 degrees of freedom fixed. Thanks.
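
For reference, a quick way to sanity-check these counts, assuming the IDAES model_statistics utilities (this flowsheet appears to be IDAES-based):

from idaes.core.util.model_statistics import (
    degrees_of_freedom,
    number_activated_equalities,
    number_variables,
)

print(number_variables(m))             # compare with the 898 variables above
print(number_activated_equalities(m))  # should match the 895 equalities above
print(degrees_of_freedom(m))           # 3 with the inputs free, 0 once they are fixed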

Robbybp commented 7 months ago

Just tested the condition number parameter sweep plot with your latest modifications. Looks great! Very good agreement between unsuccessful instances and high condition numbers.

sbugosen commented 7 months ago
    if args.optimize:
        P = 1947379.0
        X = 0.94
        m = make_optimization_model(X, P, initialize=True)
        solver = config.get_optimization_solver(iters=35)
        print(X, P)
        results = solver.solve(m, tee=True)
        # Fix the three degrees of freedom so the equality-constrained Jacobian is square
        m.fs.reformer_bypass.split_fraction[0, "bypass_outlet"].fix(
            m.fs.reformer_bypass.split_fraction[0, "bypass_outlet"].value
        )
        m.fs.reformer_mix.steam_inlet.flow_mol.fix(m.fs.reformer_mix.steam_inlet.flow_mol[0].value)
        m.fs.feed.outlet.flow_mol.fix(m.fs.feed.outlet.flow_mol[0].value)
        nlp = PyomoNLP(m)
        igraph = IncidenceGraphInterface(m, include_inequality=False)
        vblocks, cblocks = igraph.block_triangularize()

        # Report the variables and constraints of any badly conditioned diagonal block
        for i, (vblock, cblock) in enumerate(zip(vblocks, cblocks)):
            submatrix = nlp.extract_submatrix_jacobian(vblock, cblock)
            cond = np.linalg.cond(submatrix.toarray())
            if cond > 1e10:
                print(f"block {i}: {cond}")
                for var in vblock:
                    print(f"Variable:  {var.name}")
                for con in cblock:
                    print(f"Constraint:  {con.name}")

Just did this test for the full space formulation, and got these results:

block 382: 93982054238.5497
Variable:  fs.reformer.control_volume.properties_out[0.0].temperature
Variable:  fs.reformer.lagrange_mult[0.0,C]
Variable:  fs.reformer.lagrange_mult[0.0,H]
Variable:  fs.reformer.lagrange_mult[0.0,O]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,CH4]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,C2H6]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,C3H8]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,C4H10]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,O2]
Variable:  fs.reformer.control_volume.properties_out[0.0].flow_mol
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,H2O]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,CO2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,CO]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,N2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,Ar]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[H2]
Variable:  fs.reformer.control_volume.properties_out[0.0].flow_mol_phase[Vap]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,H2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[CO]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[H2O]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[CO2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[CH4]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[C2H6]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[C3H8]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[C4H10]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[N2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[O2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_comp[Ar]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,H2]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,CO]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,H2O]
Variable:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp[Vap,CO2]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,CH4]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,C2H6]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,C3H8]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,C4H10]
Variable:  fs.reformer.control_volume.properties_out[0.0].mole_frac_phase_comp[Vap,O2]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,H2]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,CO]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,H2O]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,CO2]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,CH4]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,C2H6]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,C3H8]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,C4H10]
Constraint:  fs.reformer.gibbs_minimization[0.0,Vap,O2]
Constraint:  fs.reformer.conv_constraint
Constraint:  fs.reformer.control_volume.element_balances[0.0,H]
Constraint:  fs.reformer.control_volume.element_balances[0.0,C]
Constraint:  fs.reformer.control_volume.element_balances[0.0,O]
Constraint:  fs.reformer.control_volume.element_balances[0.0,N]
Constraint:  fs.reformer.control_volume.element_balances[0.0,Ar]
Constraint:  fs.reformer.control_volume.properties_out[0.0].sum_mole_frac_out
Constraint:  fs.reformer.control_volume.properties_out[0.0].total_flow_balance
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[H2]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[CO]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[H2O]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[CO2]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[CH4]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[C2H6]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[C3H8]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[C4H10]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[N2]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[O2]
Constraint:  fs.reformer.control_volume.properties_out[0.0].component_flow_balances[Ar]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,H2]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,CO]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,H2O]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,CO2]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,CH4]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,C2H6]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,C3H8]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,C4H10]
Constraint:  fs.reformer.control_volume.properties_out[0.0].log_mole_frac_phase_comp_eqn[Vap,O2]
block 524: 136900850220.33492
Variable:  fs.reformer_recuperator.delta_temperature_in[0.0]
Variable:  fs.reformer_recuperator.hot_side.properties_out[0.0].temperature
Variable:  fs.reformer_recuperator.cold_side.heat[0.0]
Variable:  fs.reformer_recuperator.delta_temperature_out[0.0]
Variable:  fs.reformer_recuperator.hot_side.heat[0.0]
Variable:  fs.reformer_recuperator.cold_side.properties_out[0.0].temperature
Constraint:  fs.reformer_recuperator.delta_temperature_in_equation[0.0]
Constraint:  fs.reformer_recuperator.delta_temperature_out_equation[0.0]
Constraint:  fs.reformer_recuperator.unit_heat_balance[0.0]
Constraint:  fs.reformer_recuperator.heat_transfer_equation[0.0]
Constraint:  fs.reformer_recuperator.hot_side.enthalpy_balances[0.0]
Constraint:  fs.reformer_recuperator.cold_side.enthalpy_balances[0.0]

Are you aware of any formal theorem that might say something like "The condition number of a block matrix is at least the largest condition number of one of its blocks"? Or something similar? I wonder if there is formal theory that might help explain these results even more.

Also, would applying this procedure be a good method to identify which unit operation we have to replace with surrogates/implicit functions? Simply see, at every iteration, which block is giving the highest condition number?

Robbybp commented 7 months ago

Are you aware of any formal theorem that might say something like "The condition number of a block matrix is at least the largest condition number of one of its blocks"?

I thought this would be fairly straightforward, but I tried for a little bit and didn't come up with anything. This post https://mathoverflow.net/questions/265887/bounding-the-minimum-singular-value-of-a-block-triangular-matrix provides bounds on the minimum singular value, but they involve the off-diagonal block. It looks like, while the determinant of a block triangular matrix can easily be characterized in terms of the diagonal blocks, the singular values cannot.
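
As a side note, the part that does carry over cleanly for a block triangular matrix is the determinant and the largest singular value:

$$
M = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix},
\qquad
\det M = \det A_{11}\,\det A_{22},
\qquad
\|M\|_2 \ge \max\{\|A_{11}\|_2,\ \|A_{22}\|_2\},
$$

but there is no analogous statement for $\sigma_{\min}(M)$ in terms of the diagonal blocks alone, which is why the bounds in that post also involve $\|A_{12}\|$.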

sbugosen commented 7 months ago

I decided to write a very simple script to test "The condition number of a block matrix is at least the largest condition number of one of its blocks":

import numpy as np

M = 3
A11 = np.random.rand(M, M)  # build a random 3x3 block
A12 = np.random.rand(M, M)
A21 = np.random.rand(M, M)
A22 = np.random.rand(M, M)

Block = np.block([[A11, A12], [A21, A22]])
print(Block)
print(Block.shape)
cond_num = np.linalg.cond(Block)
cond_numA11 = np.linalg.cond(A11)
cond_numA12 = np.linalg.cond(A12)
cond_numA21 = np.linalg.cond(A21)
cond_numA22 = np.linalg.cond(A22)
print("The condition number of matrix Block is:", cond_num)
print("The condition number of matrix A11 is:", cond_numA11)
print("The condition number of matrix A12 is:", cond_numA12)
print("The condition number of matrix A21 is:", cond_numA21)
print("The condition number of matrix A22 is:", cond_numA22)

Sometimes, I get the answer that we expect:

The condition number of matrix Block is: 213.14360734007445
The condition number of matrix A11 is: 6.360188101019681
The condition number of matrix A12 is: 6.426901717240788
The condition number of matrix A21 is: 15.471177505088507
The condition number of matrix A22 is: 9.953369891123037

And sometimes, our hypothesis gets disproven:

The condition number of matrix Block is: 22.066126973314187
The condition number of matrix A11 is: 29.265897654504663
The condition number of matrix A12 is: 5.449264955011919
The condition number of matrix A21 is: 14.492652122183436
The condition number of matrix A22 is: 8.997115814664989

Though I wonder if the hypothesis holds if I build the block matrix as a block triangular matrix in the first place. I'll try this (see the sketch below) and keep searching for theory.
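
A minimal variation of the script above for that test would just zero the lower-left block, e.g.:

import numpy as np

M = 3
A11 = np.random.rand(M, M)
A12 = np.random.rand(M, M)
A22 = np.random.rand(M, M)

# Block upper triangular: the (2, 1) block is zero
BlockTri = np.block([[A11, A12], [np.zeros((M, M)), A22]])

print("The condition number of matrix BlockTri is:", np.linalg.cond(BlockTri))
print("The condition number of matrix A11 is:", np.linalg.cond(A11))
print("The condition number of matrix A22 is:", np.linalg.cond(A22))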

Robbybp commented 7 months ago

Also, would applying this procedure be a good method to identify what unit operation we have to replace with surrogates/implicit functions? Simply see at every iteration what block is giving the highest condition number?

This seems like a good idea if a small number of blocks have disproportionately high condition numbers.

Robbybp commented 7 months ago

"The condition number of a block matrix is at least the largest condition number of one its blocks"

I'm not sure we need to prove this, as we know that in our particular case the full Jacobian has a huge condition number. (As an aside, did you test the condition number of the full-model square Jacobian, with degrees of freedom fixed?) We are most interested in explaining the jump in condition number that we observe in the full-space model. We now have some idea why the condition number is so large. I would now be most interested to see which sub-block condition numbers (if any) increase significantly when we observe the increase in overall condition number.
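
Something along these lines could work for that comparison, reusing the pieces from earlier in this thread (just a sketch, not a finished script; it re-solves with increasing iteration limits, and the degrees of freedom would need to be fixed after each truncated solve as before):

block_cond_history = {}  # block index -> condition number at each iteration limit
for iters in range(10, 20):
    m = make_optimization_model(X, P, initialize=True)
    solver = config.get_optimization_solver(iters=iters)
    solver.solve(m, tee=False)
    # ... fix the three degrees of freedom here, as in your earlier snippet ...
    nlp = PyomoNLP(m)
    igraph = IncidenceGraphInterface(m, include_inequality=False)
    vblocks, cblocks = igraph.block_triangularize()
    # The partition is structural, so block i should refer to the same block at every iteration
    for i, (vb, cb) in enumerate(zip(vblocks, cblocks)):
        cond = np.linalg.cond(nlp.extract_submatrix_jacobian(vb, cb).toarray())
        block_cond_history.setdefault(i, []).append(cond)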

sbugosen commented 7 months ago

I would now be most interested to see which sub-block condition numbers (if any) increase significantly when we observe the increase in overall condition number.

For successful instance Subset 1, X=0.90, P = 1.94 MPa, the jump occurs at iteration 13. The biggest change in condition number is:

Iteration 12 - Block 524 - Reformer Recuperator - Condition number = 3e10
Iteration 13 - Block 524 - Reformer Recuperator - Condition number = 4e11

In this particular instance, the condition number of the reformer decreases, though the change is not as significant as the change in condition number for the recuperator:

Iteration 12 - Block 382 - Reformer - Condition number = 1e11
Iteration 13 - Block 382 - Reformer - Condition number = 8e10

I'll see if it's the same case for the other successful instances.

Robbybp commented 7 months ago

Iteration 12 - Block 524 - Reformer Recuperator - Condition number = 3e10
Iteration 13 - Block 524 - Reformer Recuperator - Condition number = 4e11

Do you know if any other blocks show an increase in condition number?

sbugosen commented 7 months ago

Do you know if any other blocks show an increase in condition number?

For this instance, the only increase is in the reformer recuperator. I'm currently writing a script to get these results.

sbugosen commented 7 months ago

These are the graphs for 4 successful instances with the square Jacobian. They look very similar to the graphs for the rectangular Jacobian. Could you please remind me why it was important to see the difference between these two? Thank you.

[Plot: condition number vs. iteration for 4 successful instances, square Jacobian]
Robbybp commented 7 months ago

Could you please remind me why it was important to see the difference between these two

The block triangular decomposition only applies to the square Jacobian, so if there was a direct link between condition number of the full matrix and the diagonal blocks, it would be important to use the square Jacobian in that comparison. And if this matrix exhibited different behavior than the rectangular Jacobian, it would have been an indicator that we shouldn't be looking at the block triangularization to try to explain this jump.

sbugosen commented 7 months ago

I understand. I will check if the jump for Subset 1 is still occurring at iteration 12. I'll make sure of this before moving on to see variable values.

sbugosen commented 7 months ago

RECAP for Subset 1 (successful instance): The jump still happens at iteration 12 when I evaluate the condition number for the square Jacobian, and block 524, corresponding to the reformer recuperator, has the only increase in condition number from iteration 12 to iteration 13.

The variable values associated with block 524 for the full-space and implicit formulations, and at optimality, are shown below:

[Table: variable values associated with block 524]

I was expecting a larger difference for the full-space variable values. I assume that small variations in the matrix cause very large variations in its inverse if it is close to singular, right?

Robbybp commented 7 months ago

I was expecting a larger difference for the full-space variable values. I assume that small variations in the matrix cause very large variations in its inverse if it is close to singular, right?

This could be the case. Are these just the variables "in the block", or the "input variables" as well (those that participate in the equations in the block, but aren't in the block)?

sbugosen commented 7 months ago

This could be the case. Are these just the variables "in the block", or the "input variables" as well (those that participate in the equations in the block, but aren't in the block)?

These are only the variables in the block. How do I extract the variables that participate in the block equations? Using identify_variables() for every constraint in the block?

Robbybp commented 7 months ago

How do I extract the variables that participate in the block equations? Using identify_variables() for every constraint in the block?

That's one way, but here is something that might be slightly easier:

from pyomo.util.subsystems import create_subsystem_block

i = 524
block = create_subsystem_block(cblocks[i], vblocks[i])
# block.cons is an IndexedConstraint containing the constraints in cblocks[i]
# block.vars is an IndexedVar containing the variables in vblocks[i]
# block.input_vars is an IndexedVar containing the other variables in these constraints

This just runs identify_variables under the hood, but I have done this enough times that I made a utility function for it.
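
For example, to look at the values of the input variables:

for var in block.input_vars.values():
    print(var.name, var.value)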

sbugosen commented 7 months ago

FULL-SPACE: Below is a table showing which variables associated with block 524 change by more than 50% when going from iteration 12 to iteration 13.

vars_jump_more_than_50pt.csv

Basically, the reformer is making the recuperator become ill-conditioned.

Robbybp commented 7 months ago

And it looks like the ill-conditioning is happening because we jump to a region where the sums of mole fractions are significantly greater than 1. Do you know if these mole fractions look better in the implicit model?

sbugosen commented 7 months ago

On it. But then, shouldn't the ill-conditioning be happening in the reformer block, and not the recuperator? Well, they both have very high condition numbers either way, but the one for the recuperator is higher.

Robbybp commented 7 months ago

For one, not all of these variables participate in the reformer, right? For those that do (hot side inlet), I don't have a good intuition for why they don't make the reformer poorly conditioned. Maybe they are linked to the reformer via trivial equality constraints (expanded arcs), which are violated, so the reformer outlet mole fractions actually aren't this bad?

sbugosen commented 7 months ago

IMPLICIT: Below is a table showing variables associated with block 524 from iteration 12 to iteration 13. Ignore the table title.

Filter the table for:

fs.reformer_recuperator.hot_side.properties_in[0.0].mole_frac_phase_comp[Vap,{insert component}]

vars_jump_more_than_50pt.csv

sbugosen commented 7 months ago

so the reformer outlet mole fractions actually aren't this bad?

Yes, you are correct. The equalities that link the outlet of the reformer with the inlet of the recuperator are indeed violated.

Robbybp commented 7 months ago

Great! That's a really good explanation. Of course, this raises the question of why the full-space model is taking a step that increases all these mole fractions so much. We could try to explain this jump in terms of all the different infeasibilities at iteration 12, but that would require constructing and factorizing the KKT matrix, then performing a bunch of back-solves.

sbugosen commented 7 months ago

I pushed the script identify_vars.py, which I used to get those tables. To make it easier to use, I have to add command-line arguments. In progress.

but that would require constructing and factorizing the KKT matrix

I'll proceed to try to build the KKT matrix as a first step, just as personal homework.
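
For reference, a rough starting point with PyNumero might look like the sketch below. This only assembles the basic equality-constrained KKT matrix, ignoring bounds, inequalities, and Ipopt's regularization, so it is not exactly the matrix Ipopt factorizes:

import numpy as np
import scipy.sparse as sps
from pyomo.contrib.pynumero.interfaces.pyomo_nlp import PyomoNLP

nlp = PyomoNLP(m)  # m stopped at the iteration of interest
# For the KKT matrix at that iterate, the duals should be loaded first, e.g. with nlp.set_duals(...)
hess = nlp.evaluate_hessian_lag()    # Hessian of the Lagrangian at the current point
jac_eq = nlp.evaluate_jacobian_eq()  # Jacobian of the equality constraints

# Assemble [[H, J^T], [J, 0]]
kkt = sps.bmat([[hess, jac_eq.transpose()], [jac_eq, None]], format="coo")
print(kkt.shape, np.linalg.cond(kkt.toarray()))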

sbugosen commented 6 months ago

To run the full-space sweep with the condition number calculation:

python run_fullspace_sweep.py --calc_condition_number

otherwise:

python run_fullspace_sweep.py

To plot the condition number grid do:

python plot_condition_num_grid.py data/fullspace-sweep.csv

Robbybp commented 6 months ago

Before merging this PR, I'd like to separate data generation from plotting.

This is a fairly significant change to this PR, so I'm not in a hurry to get this merged. If I were you, I'd focus on constructing the KKT matrix and collecting useful information from it for now.

sbugosen commented 6 months ago

To get condition number vs iteration plots:

For unsuccessful instances: python condition_num_plots.py --unsuccessful
For successful instances: python condition_num_plots.py

sbugosen commented 6 months ago

If I were you, I'd focus on constructing the KKT matrix and collecting useful information from it for now.

Sounds good.