Closed pstjohn closed 6 years ago
Definitely unintended behavior. Is there any particular reason, though, why you are removing the reaction rather than knocking it out? I'm fairly sure that knock-outs are reverted as intended.
Indeed, knockouts are handled fine. This was more just an easy example to show the issue. For me, it's because I'm passing the model to a visualization library and don't want the reactions to show up :). But more generally, if it's a feature we want to support, we should probably make sure it works correctly. I wonder whether this is a recent break, or whether context-managed model additions/removals were never re-added to the solver?
So the context manager is adding the variables back to model.solver.variables...
```
In [4]: id(model.reactions.PDH.forward_variable)
Out[4]: 4703595688

In [8]: id(model.solver.variables[-2])
Out[8]: 4703595688
```
Where else could the disconnect be happening?
My first intuition was that the error was due to pending optlang changes, but each optimization should first call update on the model, so in principle that should take care of it. Still worth investigating, I assume. @KristianJensen any idea?
The problem is that when a variable is deleted, it is also removed from every constraint in which it participates, so you would also have to reset the coefficients for each constraint that included that variable.
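A minimal pure-Python sketch of that bookkeeping, with plain dicts standing in for the optlang variables and constraints (this is an illustration, not the cobrapy or optlang API): removing a variable silently drops its coefficients from the constraints, so an undo has to save and restore them.

```python
# Toy stand-ins: a "solver" holds a set of variables plus, per constraint,
# a map of variable -> coefficient. Not the optlang API.
solver = {
    "variables": {"ACKr", "ADK1", "ATPM"},
    "constraints": {
        "atp_c": {"ACKr": -1.0, "ADK1": -1.0, "ATPM": -1.0},
    },
}

def remove_variable(solver, name):
    """Removing a variable also strips it from every constraint,
    so the coefficients must be saved if we ever want to undo this."""
    saved = {
        cons: coeffs.pop(name)
        for cons, coeffs in solver["constraints"].items()
        if name in coeffs
    }
    solver["variables"].discard(name)
    return saved  # needed later to rebuild the constraints

def restore_variable(solver, name, saved):
    """Re-adding the variable alone is not enough: the saved
    coefficients must be written back into each constraint."""
    solver["variables"].add(name)
    for cons, coeff in saved.items():
        solver["constraints"][cons][name] = coeff

saved = remove_variable(solver, "ACKr")
assert "ACKr" not in solver["constraints"]["atp_c"]   # coefficient is gone

restore_variable(solver, "ACKr", saved)
assert solver["constraints"]["atp_c"]["ACKr"] == -1.0  # fully restored
```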
```
In [54]: import cobra.test
    ...: model = cobra.test.create_test_model('textbook')
    ...:

In [55]: model.constraints[16].expression
Out[55]: -1.0*ACKr + 1.0*ACKr_reverse_b49c0 - 1.0*ADK1 + 1.0*ADK1_reverse_a6f90 - 1.0*ATPM + 1.0*ATPM_reverse_5b752 + 1.0*ATPS4r - 1.0*ATPS4r_reverse_64306 - 59.81*Biomass_Ecoli_core + 59.81*Biomass_Ecoli_core_reverse_2cdba - 1.0*GLNS + 1.0*GLNS_reverse_59581 - 1.0*GLNabc + 1.0*GLNabc_reverse_1d82a - 1.0*PFK + 1.0*PFK_reverse_d24a6 - 1.0*PGK + 1.0*PGK_reverse_02696 - 1.0*PPCK + 1.0*PPCK_reverse_2557d - 1.0*PPS + 1.0*PPS_reverse_1c319 + 1.0*PYK - 1.0*PYK_reverse_bc8ff - 1.0*SUCOAS + 1.0*SUCOAS_reverse_22958

In [56]: with model:
    ...:     model.reactions.ACKr.remove_from_model()
    ...:     model.optimize()
    ...:

In [57]: model.constraints[16].expression
Out[57]: -1.0*ADK1 + 1.0*ADK1_reverse_a6f90 - 1.0*ATPM + 1.0*ATPM_reverse_5b752 + 1.0*ATPS4r - 1.0*ATPS4r_reverse_64306 - 59.81*Biomass_Ecoli_core + 59.81*Biomass_Ecoli_core_reverse_2cdba - 1.0*GLNS + 1.0*GLNS_reverse_59581 - 1.0*GLNabc + 1.0*GLNabc_reverse_1d82a - 1.0*PFK + 1.0*PFK_reverse_d24a6 - 1.0*PGK + 1.0*PGK_reverse_02696 - 1.0*PPCK + 1.0*PPCK_reverse_2557d - 1.0*PPS + 1.0*PPS_reverse_1c319 + 1.0*PYK - 1.0*PYK_reverse_bc8ff - 1.0*SUCOAS + 1.0*SUCOAS_reverse_22958
```
Should be fixable by calling model._populate_solver with only the reaction when re-adding.
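A self-contained sketch of that idea, using a toy class in place of a real cobra model (ToyModel, solver_terms, and the _contexts list are invented stand-ins; only the `model._populate_solver([reaction])` call mirrors the actual suggestion): the registered undo both re-adds the reaction and repopulates the solver for just that reaction.

```python
from functools import partial

class ToyModel:
    """Minimal stand-in for a cobra model; only what this sketch needs."""

    def __init__(self):
        self.reactions = []
        self.solver_terms = {}   # reaction id -> its coefficients in the solver
        self._contexts = [[]]    # stack of undo callbacks, like cobrapy's history

    def _populate_solver(self, reaction_ids):
        # Proposed fix: rebuild solver terms for just these reactions.
        for rxn in reaction_ids:
            self.solver_terms[rxn] = {"atp_c": -1.0}  # toy coefficient

    def remove_reaction(self, rxn):
        self.reactions.remove(rxn)
        self.solver_terms.pop(rxn, None)
        # Register an undo that re-adds the reaction AND repopulates the solver.
        self._contexts[-1].append(partial(self._readd, rxn))

    def _readd(self, rxn):
        self.reactions.append(rxn)
        self._populate_solver([rxn])

    def reset(self):
        # Run undo callbacks in reverse order, as when a `with model:` block exits.
        while self._contexts[-1]:
            self._contexts[-1].pop()()

model = ToyModel()
model.reactions.append("ACKr")
model._populate_solver(["ACKr"])

model.remove_reaction("ACKr")
assert "ACKr" not in model.solver_terms               # solver terms are gone

model.reset()
assert model.solver_terms["ACKr"] == {"atp_c": -1.0}  # rebuilt, not just re-added
```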
> so you would also have to reset the coefficients for each constraint that includes that variable
Is this something we should enable by default upon leaving the context? Seems to me that would be the expected behavior.
For that particular modification, yes. It will need a slight adjustment to _populate_solver so that it does not add already-existing variables. But all in all not difficult to fix, I think...
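A sketch of that guard in isolation (a plain Python set, not the real _populate_solver internals; the helper name is illustrative): only variables the solver does not already hold get added, so re-adding a reaction on context exit is idempotent.

```python
def add_variables(existing, to_add):
    """Add only variables the solver does not already hold, mirroring the
    proposed guard in _populate_solver."""
    new = [v for v in to_add if v not in existing]
    existing.update(new)
    return new

solver_vars = {"ADK1", "ATPM"}
added = add_variables(solver_vars, ["ACKr", "ADK1", "ACKr_reverse"])
assert added == ["ACKr", "ACKr_reverse"]                # ADK1 was skipped
assert solver_vars == {"ACKr", "ACKr_reverse", "ADK1", "ATPM"}
```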
So after making that modification to _populate_solver, it errors out on optimize, since solver.update appears to try to add the variables again when they're already there...
When removing a reaction from a model inside a context, it appears the reaction is added back to the model but the underlying solver objects are mangled.