:warning: Base: 84.40% // Head: 83.85% // Decreases project coverage by -0.55%

Coverage data is based on head (c669604) compared to base (7473d13). Patch coverage: 100.00% of modified lines in the pull request are covered.

:exclamation: Current head c669604 differs from the pull request's most recent head fef1f5d. Consider uploading reports for the commit fef1f5d to get more accurate results.
Thanks for the comments. I wonder if there is a faster way to get the flux values from the solution? All of our benchmarking was done by using Gurobi directly to solve the LPs, so access to the results was quicker. We have the original code for that in this repo: https://github.com/DKenefake/fasterfva
I did try benchmarking with a couple of larger models, and the algorithm was faster for those.
Hmm, I can't reproduce that even with larger models. I tried a bunch from your preprint, but they all ran much faster on the old implementation. For instance, iJO1366 took 5 min with the code from this PR for me and 20 s on the old implementation. Getting the primal solution is already done with pure C code, so I doubt you could speed that up much. I recently added some optimizations for Gurobi to optlang, but there is an intrinsic tradeoff when getting many primal values, especially by name. Can you give some more info on how you ran your benchmarks, like OS and software versions?
Sorry, I meant whether there is a faster way to gather the fluxes from the solver through COBRApy (for example, directly accessing a vector with all values, so I avoid the iterations). If you look at the other repository, the benchmarking for the preprint was performed directly through Gurobi instead of COBRApy, allowing access to the solutions as vectors. That better reflects the theoretical performance of the algorithm.
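To make the direct-Gurobi pattern concrete, here is a rough sketch (an assumption based on the description above, not code taken from the fasterfva repository): after solving, the whole primal solution is read in one bulk call instead of querying variables one by one.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.read("iJO1366.lp")  # hypothetical LP file exported from the model
m.optimize()

if m.Status == GRB.OPTIMAL:
    # Bulk attribute query: one call returns the primal values of all
    # variables as a plain list, in variable order.
    fluxes = m.getAttr("X", m.getVars())
```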
For the screenshot I posted I am using a MacBook running macOS Catalina. However, I just realized that if I don't specify the `processes` parameter in the old implementation, it automatically initializes all available workers (which looks like it takes a while), explaining why I get the opposite of what you get.
I think the slowest part of this COBRApy implementation of our algorithm is gathering fluxes from the Model class through iterations (something like the sketch below), so I'll try to improve that, either by using more efficient ways to do it or by implementing new functions that gather the info directly from the solver. I would appreciate any suggestions on how to do that more efficiently.
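By "iterations" I mean a pattern roughly like this (a hypothetical illustration, not the actual PR code), where every flux is read one reaction at a time through the cobrapy interface:

```python
from cobra.io import read_sbml_model

model = read_sbml_model("iJO1366.xml")  # hypothetical path to the model file
model.optimize()

# One attribute access per reaction; every `.flux` lookup goes through the
# solver interface for that reaction's forward and reverse variables.
fluxes = {reaction.id: reaction.flux for reaction in model.reactions}
```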
Thanks again!
Oh yeah, process spawning on Mac and Windows is very slow.
Like I said, I recently pushed some optimizations to optlang that make this faster for Gurobi. If you install optlang from the GitHub master branch you will get those optimizations; there is about a 10x speed-up just in there. If you run the benchmarks in the linked repo against that version, the cobrapy version should perform better. For instance, iJO1366 with Gurobi takes about 8 seconds on my laptop with that one (single core). But to answer your question, the fastest way to get the primal values in cobrapy (in standard form) is to use `model.solver.primal_values`, which will always use the fastest available strategy for each solver. However, this will still be slower than solving from a recycled version and only getting the objective value.
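As a minimal sketch of that approach (assuming `model` is a cobrapy model that has already been built; the reaction id used is just an example):

```python
model.slim_optimize()  # solve without constructing a full Solution object

# Dict mapping solver variable names to primal values; optlang uses the
# fastest retrieval strategy available for the active solver backend.
primals = model.solver.primal_values

# The net flux of a reaction is its forward minus reverse variable,
# e.g. for a reaction with id "ATPM" (example id):
rxn = model.reactions.get_by_id("ATPM")
net_flux = primals[rxn.forward_variable.name] - primals[rxn.reverse_variable.name]
```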
Sorry, my bad. The Gurobi optimizations mostly improved getting primal values etc. This shouldn't affect the previous FVA implementation, so your previous benchmarks should not change much unless you used a very outdated optlang or something.
Hi, closing this for now since it has diverged too much. Feel free to send a new PR.
Implemented a new algorithm for flux variability analysis that is faster than the regular algorithm. It is faster because it is designed to solve fewer LPs.
A full description of the algorithm and its benchmarking is available in the pre-print paper.
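For context, a minimal usage sketch of the existing cobrapy FVA entry point that this PR targets, assuming the new algorithm plugs into the same function (the PR description above does not specify an option for selecting it, so none is shown):

```python
from cobra.io import read_sbml_model
from cobra.flux_analysis import flux_variability_analysis

model = read_sbml_model("iJO1366.xml")  # hypothetical path to a model file

# Returns a pandas DataFrame with a `minimum` and `maximum` column per reaction.
fva_result = flux_variability_analysis(model, fraction_of_optimum=0.9, processes=1)
print(fva_result.head())
```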