Closed: SquidPony closed this 2 years ago
A* and Dijkstra are completed but not yet generalized
AStar is vastly improved now, with directed and undirected versions on an arbitrary graph, and DijkstraMap is still quite good; I'm trying to make sense of some benchmarks where DijkstraMap is significantly faster than either gdx-ai's or SquidLib's AStar implementation, on top of having extra features like multiple-goal pathfinding. I still don't have a firm grasp of Q-Learning or Value Iteration, but some kind of ability to assign reward and risk values to actors would be nice. I'm not sure how that could be applied in a library, though, since so much of AI is application-specific. I think this issue can be closed in SquidLib, but if anyone has ideas for how to assign risk and reward to arbitrary actors and areas in a library context, feel free to create an issue in SquidSquad for future development.
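For anyone unfamiliar with the multiple-goal pathfinding mentioned above, the core idea of a Dijkstra map can be sketched roughly like this. This is a minimal, hypothetical Java version on a uniform-cost grid, not SquidLib's actual API; the `scan` method and class name are made up for illustration. Every goal cell is seeded at distance 0, a breadth-first sweep fills in the rest, and an actor pathfinds toward the *nearest* goal by repeatedly stepping to the lowest-valued neighbor:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class DijkstraMapSketch {
    static final int WALL = Integer.MAX_VALUE;

    // grid: true = passable; goals: array of {x, y} pairs.
    // Returns each cell's distance to its nearest goal.
    static int[][] scan(boolean[][] grid, int[][] goals) {
        int w = grid.length, h = grid[0].length;
        int[][] dist = new int[w][h];
        for (int[] row : dist)
            Arrays.fill(row, WALL);
        ArrayDeque<int[]> frontier = new ArrayDeque<>();
        for (int[] g : goals) {          // multiple goals all seed the map at 0
            dist[g[0]][g[1]] = 0;
            frontier.add(g);
        }
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!frontier.isEmpty()) {
            int[] c = frontier.poll();
            for (int[] d : dirs) {
                int nx = c[0] + d[0], ny = c[1] + d[1];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h
                        && grid[nx][ny] && dist[nx][ny] > dist[c[0]][c[1]] + 1) {
                    dist[nx][ny] = dist[c[0]][c[1]] + 1;
                    frontier.add(new int[]{nx, ny});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        boolean[][] open = new boolean[5][5];
        for (boolean[] row : open) Arrays.fill(row, true);
        // two goals at opposite corners; the center cell is 4 steps from each
        int[][] dist = scan(open, new int[][]{{0, 0}, {4, 4}});
        System.out.println(dist[2][2]); // prints 4
    }
}
```

One scan serves every actor chasing the same goal set, which is part of why this approach benchmarks well against per-actor A* searches.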
Some examples might be:

- A*
- Dijkstra
- Q-Learning
- Value Iteration

given arbitrary data sets with reward mappings.