CyberiaResurrection closed this 7 months ago
@tjoneslo, I think the relative edge exhaustion speeds are the main culprit here.
Going back to our pathological J7 edge with initial weight 10082: its excess over the J7 base weight of 7 is thus 10075.
At reuse=10, epsilon is 0.1, so that edge counts as exhausted once its excess drops below 0.7 - a ratio of 14,392.9. Each time that edge gets used in a route, its excess is multiplied by (1 - epsilon), or 0.9 in this case. 91 updates exhaust that edge and preclude further updates from tripping approx-SP updates.
At reuse=1000, epsilon is 0.001, so the exhaustion threshold is 0.007 - a ratio of 1,439,285.7. Each time that edge gets used, excess is multiplied by (1 - epsilon), or 0.999 in this case. 14,173 updates are needed to exhaust that edge with reuse=1000, vs 91 with reuse=10. Keeping the epsilon-0.1 threshold of 0.7 while reuse stays at 1000 (so excess still shrinks by 0.999 per use), 9,570 updates are needed.
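The arithmetic above can be sketched as follows - a minimal, hypothetical helper (not the PR's actual code), assuming the J7 base weight of 7 from the example, solving `excess * (1 - eps)^n <= threshold` for `n`:

```python
from math import ceil, log

def updates_to_exhaust(excess, epsilon, decay_epsilon=None):
    """Count of (1 - epsilon) shrink steps before an edge's excess drops
    below its exhaustion threshold.  The threshold is epsilon times the
    J7 base weight of 7 (an assumption taken from the example above).
    decay_epsilon lets the per-use shrink rate differ from the epsilon
    used for the threshold, as in the mixed case above."""
    if decay_epsilon is None:
        decay_epsilon = epsilon
    threshold = epsilon * 7
    # Solve excess * (1 - decay_epsilon)**n <= threshold for integer n.
    return ceil(log(threshold / excess) / log(1.0 - decay_epsilon))

excess = 10082 - 7  # initial weight minus J7 base weight = 10075
print(updates_to_exhaust(excess, 0.1))          # reuse=10   -> 91
print(updates_to_exhaust(excess, 0.001))        # reuse=1000 -> 14173
print(updates_to_exhaust(excess, 0.1, 0.001))   # 0.7 threshold, 0.999 decay -> 9570
```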
That's the first part of the blowout - roughly 100x-150x more hits required to exhaust the edge. The second part - which seems to be very much by design; it's just how it interacts with exhaustion that is the HMAS That's Very Annoying - is that as the reuse parameter increases, edges are reused less.
Over ye olde 4-sector Zhodani heartland test: with reuse=5, the first edge (on any route) exhausts after 291 routes; with reuse=1000, the first edge exhausts after 30,346 routes. Again, roughly 100x more routes before the first exhausted edge (that testbed having 108,247 routes in total).
Having all components have the exact same number of landmarks is infeasible - if (say) all components have 9 landmarks, what happens for individual components with fewer than 9 stars?
This PR scales the number of landmarks, L, as a function of component size, N: `L = min(15, ceil(3 * log10(N)))`
Single-star components have zero landmarks, but are never hit in pathfinding, so they're not a problem. 10-star components have 3 landmarks, 100-star components have 6 landmarks, 1,000-star components have 9 landmarks, 10,000-star components have 12 landmarks and 100,000-and-up-star components have the maximal 15 landmarks.
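That scaling can be sketched as - a hypothetical helper name, not the PR's actual code:

```python
from math import ceil, log10

def landmark_count(num_stars):
    """Landmarks per component: L = min(15, ceil(3 * log10(N))).
    Single-star components get zero landmarks (log10(1) == 0)."""
    if num_stars < 1:
        return 0
    return min(15, ceil(3 * log10(num_stars)))

for n in (1, 10, 100, 1_000, 10_000, 100_000):
    print(n, landmark_count(n))  # 0, 3, 6, 9, 12, 15
```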
The first six landmarks are the extremes along the q, r and s axes (extending the status quo). The seventh landmark keys off an observation I tripped over - all else equal, having the pathfinding target be a landmark speeds up pathfinding compared to a non-landmark. Where source and target stars are both non-landmarks, or both landmarks, no problem, the status quo prevails. Likewise where the target star is a landmark. Where the source is a landmark and the target isn't, collect the speed gain by transposing the two before that pathfinding run starts. The source-target transpose has been good for 1-3% fewer nodes expanded during testing.
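The transpose rule boils down to one conditional swap - a minimal sketch with a hypothetical helper name (on an undirected graph the resulting path is the same; only the expansion order changes):

```python
def orient_query(source, target, landmarks):
    """Swap source and target when only the source is a landmark, so the
    search finishes at a landmark.  All other cases keep the status quo.
    `landmarks` is assumed to be a set of node identifiers."""
    if source in landmarks and target not in landmarks:
        return target, source
    return source, target

# Illustrative star names, not taken from the actual testbed:
landmarks = {"Zhdant", "Cronor"}
print(orient_query("Zhdant", "Querion", landmarks))   # swapped
print(orient_query("Querion", "Zhdant", landmarks))   # unchanged
```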
Eighth and later landmarks are generated using the "avoid" algorithm, as follows:
The avoid algorithm seeks out gaps in bound coverage, and iteratively fills them in.
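The gap-filling idea can be sketched as follows. This is a deliberate simplification, not the full avoid algorithm (which weights candidates by shortest-path-tree subtree sizes) and not the PR's actual code: score each node by how far the best existing landmark lower bound falls short of the true distance from a root, and take the worst-covered node as the next landmark.

```python
import heapq

def sp_distances(graph, source):
    """Dijkstra distances from source over an adjacency dict of the form
    {node: [(neighbour, weight), ...]} (hypothetical representation)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def next_avoid_landmark(graph, root, landmark_dists):
    """Pick the node whose landmark lower bound |d(L, v) - d(L, root)|
    falls furthest short of the true distance d(root, v) - i.e. the
    biggest gap in bound coverage seen from `root`."""
    dist = sp_distances(graph, root)
    best_node, best_gap = None, -1.0
    for v, d in dist.items():
        lb = max((abs(ld[v] - ld[root]) for ld in landmark_dists), default=0.0)
        if d - lb > best_gap:
            best_gap, best_node = d - lb, v
    return best_node
```

On a toy triangle (a-b: 1, b-c: 1, a-c: 5) with an existing landmark at b, node c has the largest coverage gap from root a, so it becomes the next landmark.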
On top of that, I collected a further speedup by ensuring every pathfinding attempt starts with a finite upper bound (extending upper-bound preheating), and dropping the has_bound checks in astar_numpy.
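One standard way to guarantee a finite starting bound from precomputed landmark distances - a hedged sketch, not necessarily the mechanism this PR uses - is the triangle inequality: any route from s to t via a landmark L is a real path, so d(s, L) + d(L, t) is always a valid upper bound on d(s, t).

```python
def initial_upper_bound(source, target, landmark_dists):
    """Finite upper bound on d(source, target): the cheapest detour
    through any landmark.  `landmark_dists` is assumed to be a list of
    per-landmark shortest-path distance dicts keyed by node."""
    return min(ld[source] + ld[target] for ld in landmark_dists)

# Toy triangle (a-b: 1, b-c: 1, a-c: 5) with a landmark at b:
print(initial_upper_bound("a", "c", [{"a": 1, "b": 0, "c": 1}]))  # 2
```

Starting the search with this bound instead of infinity is what lets the has_bound branches disappear: the bound always exists.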
Further testing showed:
- Some wombat was selecting duplicate landmarks.
- It was trivial to sort component stars by descending WTN before picking landmarks.
- The ceiling on the number of landmarks has to be dialled back markedly with higher route-reuse values.
- The aforementioned wombat was collecting enough data that it was worth automating under a debug flag.