If this library is to have any hope of being deployed in a production context, it needs to be at least twice as fast.
There are two main methods for doing this:
Algorithmically: alpha-beta pruning and similar search-level improvements
Through profiling: finding bottlenecks and widening them
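To make the first approach concrete, here is a minimal sketch of negamax with alpha-beta pruning over an explicit game tree. `GameTree`, `exampleTree`, and the leaf scores are hypothetical stand-ins for the library's real game state and evaluation function; leaf scores are taken from the perspective of the player to move at that leaf.

```haskell
-- Hypothetical game-tree type standing in for the library's real state type.
data GameTree = Leaf Int | Node [GameTree]

-- Plain negamax: visits every node.
negamax :: GameTree -> Int
negamax (Leaf score) = score
negamax (Node cs)    = maximum [negate (negamax c) | c <- cs]

-- Negamax with an alpha-beta window: returns the same value, but stops
-- examining a node's children as soon as one scores >= beta (a cutoff).
negamaxAB :: Int -> Int -> GameTree -> Int
negamaxAB _     _    (Leaf score) = score
negamaxAB alpha beta (Node cs)    = go alpha cs
  where
    go a []       = a
    go a (c:rest)
      | v >= beta = v                        -- cutoff: rest is pruned
      | otherwise = go (max a v) rest
      where v = negate (negamaxAB (negate beta) (negate a) c)

exampleTree :: GameTree
exampleTree = Node [ Node [Leaf 3, Leaf 12]
                   , Node [Leaf 2, Leaf 4]
                   , Node [Leaf 14, Leaf 5]
                   ]
```

Called with a wide window, `negamaxAB (-1000000) 1000000 exampleTree` agrees with `negamax exampleTree` (both give 5) while skipping pruned subtrees; the wide finite bounds avoid the overflow that `negate minBound :: Int` would cause.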
Consider memory usage as well. Lower is better, though memory is a secondary concern unless usage is truly excessive.
Either way, collect baseline measurements first to establish where things stand now, then compare every optimization against that baseline. Use a negamax vs. negamax AI match to drive the profiling.
Haskell profiling resources:
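As a starting point, one possible baseline-profiling workflow with the standard GHC tooling might look like the following; the `ai-bench` executable name is hypothetical, standing in for whatever binary runs the negamax vs. negamax match:

```shell
# Build with profiling enabled and automatic cost centres on all bindings.
cabal build --enable-profiling --ghc-options="-fprof-auto"

# Run with time/allocation profiling (+RTS -p) and heap profiling by
# cost centre (-hc). Writes ai-bench.prof and ai-bench.hp.
./ai-bench +RTS -p -hc -RTS

# ai-bench.prof lists cost centres by %time and %alloc; render the heap
# profile as a PostScript graph with hp2ps.
hp2ps -c ai-bench.hp
```

The `.prof` output is the place to look for bottlenecks to widen, and the heap graph covers the memory-usage question above.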