I'm not going to dive too deep into the code for now, as I'd like to discuss some conceptual topics first.
The background caching doesn't work on machines that are so slow that they struggle to maintain the configured frame rate. Maybe the idle callbacks are never being called, because strictly speaking there is no idle time. This is rather unfortunate, since users with slow machines would likely benefit the most from background caching. Perhaps we can find a way to make it work there as well? Do you think it's possible to move the background caching task into a Web Worker, so we can utilize an additional CPU thread?
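To make the Web Worker idea a bit more concrete, here's a rough sketch of what the message flow could look like. None of this is code from the module; the helper functions and the worker path are hypothetical, and the main assumption is that the wall data needed for collision detection can be serialized into plain objects, since a worker has no access to the canvas.

```js
// main thread
const worker = new Worker("scripts/cache-worker.js"); // hypothetical path

function startBackgroundCaching(sceneWalls) {
  // Worker messages are structured-cloned, so Foundry documents would need to
  // be flattened into plain objects before being sent.
  worker.postMessage({ type: "buildCache", walls: sceneWalls });
}

worker.onmessage = (event) => {
  if (event.data.type === "cacheResult") {
    mergeIntoCache(event.data.nodes); // hypothetical: merge into the main-thread cache
  }
};

// cache-worker.js
self.onmessage = (event) => {
  if (event.data.type === "buildCache") {
    const nodes = buildNodesFromWalls(event.data.walls); // hypothetical collision/cache logic
    self.postMessage({ type: "cacheResult", nodes });
  }
};
```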
I think - as you already suggested - starting the background caching immediately when the world loads would be more beneficial than starting it on drag. That way it would already be ready once users start dragging tokens (I think in most games a scene will remain static after being loaded, while the GM describes what's going on, so the background task has plenty of time to complete before tokens are dragged).
I think the background caching algorithm could be simplified quite a lot by not starting the caching task from the token and working away from there, but instead starting in the top-left corner and initializing the squares row by row. This would remove the need for an explicit queue, and it would also no longer be necessary to explicitly track which squares were already initialized. As mentioned before, I think in games there would be quite a delay between loading a scene (or changing the scene by opening a door, for example) and tokens being dragged, since the GM will probably start some kind of narration at those events, so the user experience shouldn't be much worse.
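For illustration, a minimal sketch of that approach (the `initializeSquare` helper is hypothetical): progress reduces to a single cursor, so there is no queue and no separate record of which squares have already been initialized.

```js
// Row-by-row cache initialization, processed in small batches so it can run
// during idle time without blocking the main thread.
const cacheProgress = { row: 0, col: 0 };

function cacheNextBatch(gridWidth, gridHeight, batchSize) {
  let processed = 0;
  while (cacheProgress.row < gridHeight && processed < batchSize) {
    initializeSquare(cacheProgress.row, cacheProgress.col);
    processed++;
    if (++cacheProgress.col >= gridWidth) {
      cacheProgress.col = 0;
      cacheProgress.row++;
    }
  }
  // Returns true once every square in the scene has been cached.
  return cacheProgress.row >= gridHeight;
}
```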
- Limit the pathfinding algorithm to a certain amount of time/iterations.
- Carry on processing the same path in the background, unless we need to start a different path.
- Don't recalculate the path if it's the same start/end as the last path calculated.
- Levels compatibility: keep multiple caches for different elevations, maybe the last two or three elevations used, so we don't have to regenerate going back and forth between elevations.
These are independent of background caching, so I'd like to discuss them outside of this PR.
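Just to make the last item in that list a bit more concrete (purely illustrative, with hypothetical helper names), keeping caches for the last few elevations used could look something like this:

```js
const MAX_ELEVATION_CACHES = 3;
const elevationCaches = new Map(); // elevation -> node cache

function getCacheForElevation(elevation) {
  if (elevationCaches.has(elevation)) {
    // Re-insert to mark this elevation as most recently used.
    const cache = elevationCaches.get(elevation);
    elevationCaches.delete(elevation);
    elevationCaches.set(elevation, cache);
    return cache;
  }
  const cache = createEmptyCache(); // hypothetical
  elevationCaches.set(elevation, cache);
  if (elevationCaches.size > MAX_ELEVATION_CACHES) {
    // Maps iterate in insertion order, so the first key is the least recently used.
    const oldest = elevationCaches.keys().next().value;
    elevationCaches.delete(oldest);
  }
  return cache;
}
```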
The idea of web workers is interesting. I'll have a look into it and see if they'll work for this. As long as they can resolve collision detection, it should work absolutely fine.
In terms of performing the caching simply as a grid rather than working out from the token: as long as the caching is sufficiently fast and the cache isn't too big, this should be fine. One of the potential advantages of caching from the token's position is that we never need to cache places that are impossible to path to. Again, if the caching is sufficiently fast and the cache isn't a significant size in memory, then that advantage is minimal.
Caching nodes can be the slowest part of the pathfinding algorithm, so we should do as much of this as possible before the information is needed.
When we select a token, when a selected token is updated, or when we start pathfinding, we check whether we already have a cache built for the token's current state. If we don't, we start building the cache, so that by the time we start pathfinding, a lot of the caching work is already done.
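As a rough sketch of that flow (the helpers are hypothetical; `cacheKeyFor` would capture whatever parts of the token's state affect pathfinding, such as size and elevation):

```js
const caches = new Map();

function ensureCacheFor(token) {
  const key = cacheKeyFor(token);
  if (!caches.has(key)) {
    const cache = createEmptyCache();
    caches.set(key, cache);
    // Kick off the background cache build so it's warm before pathfinding starts.
    startBackgroundCaching(token, cache);
  }
  return caches.get(key);
}

// Called from the token-select, token-update, and pathfinding-start code paths.
```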
For the best results, we want to start pathing from the same place we build our cache - that way we're most likely to already have some nodes cached along the path and reduce the total time to complete the path. To that end, I've changed the algorithm to work from the token to the target rather than the other way around. We then create a linked list of nodes from the start of the path to the end, so the node returned from `calculatePath` is still the start of the path.
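For reference, reversing the came-from chain into a forward-linked list only takes a few lines. This is a sketch with hypothetical `previous`/`next` field names: the search runs from the token toward the target, leaving each node pointing back at its predecessor, and reversing those links gives a path whose head is the starting node.

```js
function toLinkedPath(goalNode) {
  let next = null;
  let current = goalNode;
  while (current) {
    current.next = next;       // point each node forward to its successor
    next = current;
    current = current.previous;
  }
  return next; // the token's starting node; follow .next to walk the path
}
```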