
Open-source, state-of-the-art vehicle routing problem solver in an easy-to-use Python package.
https://pyvrp.org/

Performance stuff #416

Open N-Wouda opened 11 months ago

N-Wouda commented 11 months ago

Discussion thread for ideas to improve PyVRP's performance. Could be low-level stuff, like better code optimisation, smaller data types, and/or caching. Or a smarter data structure, better algorithm, etc. Anything's in scope.

Edit: there's also perhaps some stuff in #181 that we might want to implement as part of this.

N-Wouda commented 11 months ago

One idea that might help a bit relates to how we skip move evaluation when nothing's changed since the last time we considered the client pairs $U$ and $V$:

https://github.com/PyVRP/PyVRP/blob/7604244262d342f14b649f1845ecd30eda56bc7e/pyvrp/cpp/search/LocalSearch.cpp#L92-L93

This works fine, but any change in the routes of $U$ and $V$ is now a reason to re-evaluate, even when that change happens many nodes after $U$ or $V$ and has no consequences for the evaluation.

We might make this more efficient by taking into account the size of our move operators (e.g., 1, 2, 3, etc. nodes), and checking whether there has been a recent modification that affected nodes sufficiently near either $U$ or $V$ to warrant a new evaluation.

I think I know how to implement this efficiently, but I'm lacking the time right now to pursue it. This will probably have to wait until PyVRP 0.8.0.
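To make the idea concrete, here is a minimal sketch of what such a position-aware staleness check could look like. Everything here (the `ProximityCache` name, the per-node timestamps, the `span` parameter) is illustrative and not how PyVRP's `LocalSearch` is actually structured; the point is only that marking the nodes within the operators' span as modified keeps the per-pair test O(1). Note the caveat raised in the next comment: for route-wide quantities such as time windows this check is a heuristic, not an exact equivalence.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative sketch only; names and data layout are assumptions, not
// PyVRP's actual LocalSearch internals.
struct ProximityCache
{
    size_t span;  // largest number of consecutive nodes any operator touches
    std::vector<int> lastModified;                 // per client node
    std::vector<std::vector<int>> lastTestedPair;  // per (U, V) client pair

    ProximityCache(size_t numClients, size_t span)
        : span(span),
          lastModified(numClients + 1, 0),
          lastTestedPair(numClients + 1, std::vector<int>(numClients + 1, -1))
    {
    }

    // Called when a move changes a route at position pos: only the clients
    // within `span` positions of the change are marked as modified.
    void markModified(std::vector<int> const &route, size_t pos, int now)
    {
        size_t const start = pos >= span ? pos - span : 0;
        size_t const end = std::min(route.size(), pos + span + 1);
        for (size_t idx = start; idx != end; ++idx)
            lastModified[route[idx]] = now;
    }

    // The pair (U, V) needs re-evaluation only if a nearby modification
    // happened after the last time we tested this pair.
    bool shouldEvaluate(int U, int V) const
    {
        auto const lastTested = lastTestedPair[U][V];
        return lastModified[U] > lastTested || lastModified[V] > lastTested;
    }

    void markTested(int U, int V, int now) { lastTestedPair[U][V] = now; }
};
```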

wouterkool commented 11 months ago

> This works fine, but any change in the routes of U and V is now a reason to re-evaluate, even when that change happens many nodes after U or V and has no consequences for the evaluation.

I like the idea, but I'm not sure if this is correct? At least with time windows, removing a node towards the end of the route may create slack that enables inserting a customer early in the route that would previously have resulted in a large penalty, even if the affected parts of the route are far apart.

This situation may be hypothetical or may not happen frequently, so we could decide heuristically to skip those options, but that is a functional change to the algorithm, not a more efficient implementation with the same result. Or am I missing something?

N-Wouda commented 11 months ago

> This situation may be hypothetical or may not happen frequently, so we could decide heuristically to skip those options, but that is a functional change to the algorithm, not a more efficient implementation with the same result. Or am I missing something?

Not really, no. This conclusion seems correct to me.

N-Wouda commented 8 months ago

Much more aggressive caching is also in scope here. Routes are queried at least two orders of magnitude more often than they are changed. Every time something between two indices is calculated, can we somehow store the result? Is that worthwhile?
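As a strawman for what "store the result" could mean, here is a sketch of memoising a between-indices computation, using plain segment distances as the example. The `CachedRoute` class and its interface are made up for illustration and much simpler than PyVRP's actual `Route`; the point is only that results computed between two indices can be cached and invalidated wholesale whenever the route changes, which pays off precisely because queries vastly outnumber changes.

```cpp
#include <cstddef>
#include <optional>
#include <utility>
#include <vector>

// Hypothetical, simplified route with a per-segment memoisation cache.
class CachedRoute
{
    std::vector<int> clients_;                   // visits, in order
    std::vector<std::vector<int>> const &dist_;  // full distance matrix

    // cache_[start][end] holds the distance of the segment clients_[start..end],
    // or nullopt if it has not been computed since the last modification.
    mutable std::vector<std::vector<std::optional<int>>> cache_;

public:
    CachedRoute(std::vector<int> clients, std::vector<std::vector<int>> const &dist)
        : clients_(std::move(clients)),
          dist_(dist),
          cache_(clients_.size(), std::vector<std::optional<int>>(clients_.size()))
    {
    }

    // Distance of the segment clients_[start..end]: computed once, then served
    // from the cache until the route is modified again.
    int distBetween(size_t start, size_t end) const
    {
        if (auto const &cached = cache_[start][end])
            return *cached;

        int dist = 0;
        for (size_t idx = start; idx != end; ++idx)
            dist += dist_[clients_[idx]][clients_[idx + 1]];

        cache_[start][end] = dist;
        return dist;
    }

    // Any structural change throws away all cached segment data.
    void insert(size_t pos, int client)
    {
        clients_.insert(clients_.begin() + pos, client);
        cache_.assign(clients_.size(),
                      std::vector<std::optional<int>>(clients_.size()));
    }
};
```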

N-Wouda commented 7 months ago

Also relevant stuff that @leonlan and I discussed earlier today:

leonlan commented 7 months ago

> Improve our neighbourhood strategy somehow? I know FILO has a much more advanced neighbourhood system, and we might be able to learn something from how that works exactly.

Leaving out some details, FILO uses a granular neighborhood that is 1) node-based and 2) dynamic. 1) Node-based means that the number of neighbors is different per node. Let $n_{\text{neighbors}}$ denote the maximum number of neighbors; the active number of neighbors of a node lies in $[1, n_{\text{neighbors}}]$ and is dynamically updated. 2) Dynamic means that the number of neighbors of a node is updated based on whether that node is involved in improving local search moves. This is done using a "sparsification factor" $\gamma_i \in [0, 1]$ for each node $i$, with $\gamma_i^0$ being the initial sparsification factor; the number of active neighbors of node $i$ at any time is given by $\gamma_i \cdot n_{\text{neighbors}}$.
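A rough sketch of what that node-based, dynamic neighborhood bookkeeping could look like. The parameter names and, in particular, the direction of the update rule (reset $\gamma_i$ when node $i$ took part in an improving move, grow it otherwise) are my assumptions based on the description above, not a faithful reproduction of FILO or of anything in PyVRP.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical bookkeeping for per-node dynamic neighborhood sizes.
struct SparsificationState
{
    size_t maxNeighbors;        // n_neighbors in the description above
    double gammaBase;           // gamma_i^0, the initial sparsification factor
    double growth;              // multiplicative growth factor, > 1
    std::vector<double> gamma;  // gamma_i per client, in [gammaBase, 1]

    SparsificationState(size_t numClients,
                        size_t maxNeighbors,
                        double gammaBase,
                        double growth)
        : maxNeighbors(maxNeighbors),
          gammaBase(gammaBase),
          growth(growth),
          gamma(numClients + 1, gammaBase)
    {
    }

    // Active number of neighbors of a client: gamma_i * n_neighbors, with at
    // least one neighbor so the node never becomes completely isolated.
    size_t numNeighbors(size_t client) const
    {
        auto const active = static_cast<size_t>(gamma[client] * maxNeighbors);
        return std::max<size_t>(1, active);
    }

    // After a local search pass, update gamma_i depending on whether client i
    // was involved in an improving move. The direction of this rule is an
    // assumption; FILO may well do the opposite.
    void update(size_t client, bool wasInImprovingMove)
    {
        if (wasInImprovingMove)
            gamma[client] = gammaBase;  // small neighborhood was enough
        else
            gamma[client] = std::min(1.0, growth * gamma[client]);  // widen
    }
};
```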

I don't fully understand yet what "node involved in an improving move" means exactly, but these two ideas will likely improve the solver.

Implementation details:

N-Wouda commented 6 months ago

Do people actually use add_node_operator and add_route_operator to add custom node/route operators? Or do they just use whatever default operators we have selected? I suspect the latter. If that's the case, we might want to investigate how much it'd help the compiler optimise further if we bring all operators back into the LocalSearch object.

That is probably worth investigating regardless of whether we want to do it.
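For what it's worth, a toy sketch of the trade-off: with operators held behind a base-class pointer every evaluation is a virtual call, whereas operators stored as concrete members can be inlined. None of the types below mirror PyVRP's actual classes; they only illustrate why folding the operators back into LocalSearch might give the compiler more to work with, at the cost of the add_node_operator / add_route_operator extension points.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct NodeOperator
{
    virtual ~NodeOperator() = default;
    virtual int evaluate(int U, int V) const = 0;
};

struct Exchange : NodeOperator
{
    int evaluate(int U, int V) const override { return V - U; }  // dummy delta
};

// Extensible variant: users can register any operator, but every evaluate()
// goes through virtual dispatch, which is hard for the compiler to inline.
struct ExtensibleLocalSearch
{
    std::vector<std::unique_ptr<NodeOperator>> ops;

    int bestDelta(int U, int V) const
    {
        int best = 0;
        for (auto const &op : ops)
            best = std::min(best, op->evaluate(U, V));
        return best;
    }
};

// Fixed variant: the operator set is baked into the class, so calls are
// direct, inlinable, and optimisable across the evaluation loop.
struct FixedLocalSearch
{
    Exchange exchange;

    int bestDelta(int U, int V) const
    {
        return std::min(0, exchange.evaluate(U, V));
    }
};
```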