linkedin / DuaLip

DuaLip: Dual Decomposition based Linear Program Solver
https://linkedin.github.io/DuaLip/
BSD 2-Clause "Simplified" License

Fixed performance bottleneck of parallel solver #18

Closed by ksalomat 2 years ago

ksalomat commented 2 years ago

Initialization of auxiliary Encoder objects took a long time and effectively became the bottleneck of the parallel solver (based on the IntelliJ profiler). The encoder is derived dynamically (via spark implicits) every time an aggregation or transformation is performed on the problem data, i.e., several times per algorithm iteration. For a single distributed problem this is not an issue, but when solving millions of small problems in parallel the encoder is re-created unnecessarily. The fix is to recycle a small set of encoder objects (via static references). Note: the encoder is not even needed by the parallel solver, but it is hard to work around the API requirements made for the distributed solver, so we pass the encoder anyway. It would be nice to refactor the code so that the unnecessary encoder object is not created at all.
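A minimal sketch of the idea, not the actual DuaLip code: the `PrimalRow` case class and the object names below are hypothetical, but the pattern is the same one described above, namely deriving the Spark encoders once in a static object and reusing them, instead of letting `spark.implicits` re-derive them (via `newProductEncoder` and TypeTag reflection) on every aggregation or transformation inside the solver loop.

```scala
import org.apache.spark.sql.{Dataset, Encoder, Encoders, SparkSession}

// Hypothetical per-problem record type; DuaLip's actual case classes differ.
case class PrimalRow(problemId: Long, variableId: Int, value: Double)

// Encoders are built once and held in static (object-level) references,
// so TypeTag-based derivation happens a single time at class initialization
// rather than once per Dataset operation.
object ReusedEncoders {
  implicit val primalRowEncoder: Encoder[PrimalRow] = Encoders.product[PrimalRow]
  implicit val doubleEncoder: Encoder[Double] = Encoders.scalaDouble
}

object EncoderReuseSketch {
  // Called many times per algorithm iteration; picks up the cached encoder
  // from ReusedEncoders instead of invoking spark.implicits.newProductEncoder.
  def computePrimal(spark: SparkSession, rows: Seq[PrimalRow]): Dataset[PrimalRow] = {
    import ReusedEncoders._
    spark.createDataset(rows)
  }
}
```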

The attached profiler screenshot shows that the call to org.apache.spark.sql.SQLImplicits.newProductEncoder(TypeTags$TypeTag) accounts for 97.68% of its parent call (primal variable computation).

Results (200 executors, 1M dataset):
- New run: 1h 15m
- Old run: 1d 15h 15m

The results suggest a ~30x improvement, though the runs may not be directly comparable due to variability in cluster conditions.

Screen Shot 2022-02-15 at 1.14.16 PM (profiler screenshot)
ksalomat commented 2 years ago

@basukinjal MatchingSolver does not use this functionality yet, so no need to do it.