Mathemalsky opened this issue 1 month ago
Thanks, I'll have a look at where they occur.
Note that we did remove one memory leak in going to v1.7.2
Thanks for quick response and advice. I upgraded to v1.7.2, but the leak persists.
The problem occurs when using the PDLP solver introduced in v1.7.0.
I looked at a few of the memory leaks: the blocks are allocated using `cupdlp_init_*` (in `solveLpCupdlp` and `formulateLP_highs`), but I did not find any code that frees the memory after use.
A quick fix could be to call `free` (or `cupdlp_dcs_free`?) for each variable before returning from `solveLpCupdlp`. However, the problem might reappear if a premature return is later added somewhere without paying attention. Furthermore, there are other memory leaks that are not so trivial to fix, as it is not clear who owns the objects and when it is OK to call `free` (this needs more analysis).
Is there something that prevents this code from taking advantage of RAII in C++ (e.g. replacing raw arrays with `std::vector`, ...)? If not, I can work on a more robust fix in my spare time.
That's very helpful. We didn't write cuPDLP-C, so I guess we should have checked it for memory leaks.
Note that, since HiGHS uses only the CPU implementation of cuPDLP-C, its performance is unlikely to be competitive with the interior point or dual simplex solvers.
> Note that, since HiGHS uses only the CPU implementation of cuPDLP-C, its performance is unlikely to be competitive with the interior point or dual simplex solvers.
In fact that was what I expected when I looked into the code and saw all the cu- prefixes in the filenames, since my machine has no dedicated GPU. But I tested it, and the result was quite surprising:

| solver | pdlp | ipm | simplex |
|---|---|---|---|
| time | < 0.5 s | ~ 10 s | > 10 min |

My test instance is the LP relaxation of a 0-1 ILP, and the test was performed on a machine without a dedicated GPU.
What are the requirements to run PDLP on a GPU (besides the existence of a GPU)?
By the way, it would be nice and helpful to have a rough explanation of the solvers in the documentation, with details such as which anti-cycling rules are used in the simplex method, or even pseudocode or links to publications that describe the methods.
This is very interesting. I've yet to see an instance where PDLP is meaningfully faster - not that I've done any systematic benchmarking.
At the moment it's not possible to run PDLP on a GPU via HiGHS. Adapting the build so that it recognises a suitable GPU (it needs to be from Nvidia and sufficiently up to date) is WIP.
The paper referenced on the HiGHS website explains the simplex solver, but I can add the IPM reference. Meaningful pseudo-code would be too hard to write.
> By the way, it would be nice and helpful to have a rough explanation of the solvers in the documentation, with details such as which anti-cycling rules are used in the simplex method, or even pseudocode or links to publications that describe the methods.
I've added the IPM reference, and comments on the lack of references for MIP and QP.
> My test instance is the LP relaxation of a 0-1 ILP, and the test was performed on a machine without a dedicated GPU.
Could you share the instance with me?
Yes, sure. Is there a neat way to share the `HighsLP` instance I'm using? If not, I can provide the members of `HighsLP` that are `std::vector`s as comma- or space-separated values.
Thanks: the better way is to write the instance as an MPS file by calling `writeModel("foo.mps")`.
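For anyone unfamiliar with the format: MPS is a plain-text LP/MIP interchange format that HiGHS can read and write. A minimal hand-written toy instance (illustrative only, not the reporter's model) encoding "minimise x1 + 2 x2 subject to x1 + x2 <= 4, 0 <= x1 <= 4, x2 >= 0" looks like:

```
NAME          TOY
ROWS
 N  COST
 L  LIM1
COLUMNS
    X1        COST      1.0        LIM1      1.0
    X2        COST      2.0        LIM1      1.0
RHS
    RHS       LIM1      4.0
BOUNDS
 UP BND       X1        4.0
ENDATA
```

The `N` row is the objective, `L` marks a less-than-or-equal constraint, and unlisted bounds default to `[0, +inf)`.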
Ok that should have worked. Here it is: instance.zip
Thanks. I've reproduced your observations.
Dear HiGHS team! I'm using your software to solve a MILP and its LP relaxation. While doing so I came across some memory leaks which, according to the LeakSanitizer in Clang and GCC, originate in libhighs.so.
My setting: I'm using HiGHS 1.7.0 (git hash: 13363c9f1) on an Ubuntu 24.04 LTS machine with 4 logical cores. I include HiGHS in my CMake project.
The code fragment containing the call to HiGHS with the memory leak looks something like this:
When my application has terminated, LeakSanitizer reports a bunch of direct leaks like the following examples:
Furthermore, indirect leaks also occur. If considered helpful, I can provide the full list of direct and indirect memory leaks. So far I have tried to locate the leak by compiling and installing HiGHS again with lines 350-353 of HiGHS' top-level CMakeLists.txt uncommented, but this did not make the error messages more verbose.
The total size of the memory leaks is instance-dependent: a few hundred kB for a toy instance and about 9 MB for a medium instance.
So my question is: am I using HiGHS not as intended, or is there indeed a memory leak in libhighs.so?