The algorithms here accomplish two main things:

1. Replace the "win chance cache" currently used in the game to quickly display win chance percentages for up to 1000 troops with a much more efficient set of caches. The new caches are smaller overall and can compute both win chances and full outcome distributions, to typical floating point accuracy, for up to 1000 troops.
2. Approximate both win chances and outcome distributions for rolls over 1000 troops by using a bell curve only for the part of the computation that actually converges to one. This is still fast (about 1 millisecond per 10000 troops on my machine) and is much more accurate than the approximation currently in use: individual outcome chances are within 0.01% in the worst cases, and the range of possible outcomes in balanced blitz is almost always off by one troop or less.
The new algorithm works by separately computing distributions for the phase where both players roll the maximum number of dice and the phase where at least one player rolls fewer than the maximum, and by caching far fewer values for these distributions by ignoring probabilities so small that they would be lost to floating point rounding anyway.
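To make the pruning idea concrete, here is a minimal Python sketch (illustrative only, not the shipped code); the relative cutoff of 1e-16, roughly double precision epsilon, is an assumption and the real threshold may differ:

```python
def prune(dist, rel_eps=1e-16):
    """Drop probabilities too small to survive double precision rounding.

    Entries below rel_eps times the largest entry cannot change sums taken
    against the rest of the distribution, so they are removed and the
    remainder is renormalised.
    """
    cutoff = rel_eps * max(dist.values())
    kept = {k: v for k, v in dist.items() if v >= cutoff}
    total = sum(kept.values())
    return {k: v / total for k, v in kept.items()}
```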
When at least one player rolls fewer than the maximum number of dice, we cache outcome distributions much like the old algorithm does, with one full outcome distribution per attacker/defender pair. However, if one army is held at a small fixed size, the distribution of losses converges very quickly to a fixed distribution as the other army grows, so above a certain size of the larger army, dropping every probability below a given threshold yields literally the same outcome distribution for every army size. We therefore only cache results up to that point, which gives an accurate outcome distribution for every possible matchup with a cache of fixed size.
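Below is a rough, self-contained Python sketch of that caching pattern. It assumes plain Risk-style dice with no modifiers (attacker rolls up to 3 dice, defender up to 2, defender wins ties) and a battle fought until one side is wiped out; the real thresholds and modifiers differ, but the behaviour it illustrates is the one described above: past some attacker size, the pruned loss distribution against a small fixed defender stops changing, so only distributions up to that cap need to be stored.

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def per_roll(a_dice, d_dice):
    """P(attacker loses 0..n troops) for one roll of a_dice vs d_dice."""
    n = min(a_dice, d_dice)
    counts = [0] * (n + 1)
    for atk in product(range(1, 7), repeat=a_dice):
        for dfn in product(range(1, 7), repeat=d_dice):
            a, d = sorted(atk, reverse=True), sorted(dfn, reverse=True)
            counts[sum(1 for x, y in zip(a, d) if x <= y)] += 1  # defender wins ties
    total = 6 ** (a_dice + d_dice)
    return [c / total for c in counts]

@lru_cache(maxsize=None)
def outcome(att, dfn):
    """P of each final (attackers left, defenders left) when fighting to the end."""
    if att == 0 or dfn == 0:
        return {(att, dfn): 1.0}
    losses = per_roll(min(3, att), min(2, dfn))
    dist = {}
    for a_loss, p in enumerate(losses):
        d_loss = len(losses) - 1 - a_loss
        for state, q in outcome(att - a_loss, dfn - d_loss).items():
            dist[state] = dist.get(state, 0.0) + p * q
    return dist

def prune(dist, rel_eps=1e-16):
    # Same pruning rule as sketched above, without renormalisation for brevity.
    cutoff = rel_eps * max(dist.values())
    return {k: v for k, v in dist.items() if v >= cutoff}

def cache_cap(dfn, limit=500):
    """Smallest attacker size past which the pruned distribution stops changing.

    The distribution is keyed by (attacker losses, defenders left) so that it
    does not depend on the attacker's absolute size once the attacker is large.
    """
    prev = None
    for att in range(1, limit):
        cur = prune({(att - a, d): p for (a, d), p in outcome(att, dfn).items()})
        if cur == prev:
            return att - 1  # every larger attacker can reuse this distribution
        prev = cur
    return None
```

For example, `cache_cap(2)` finds the attacker size past which `outcome(att, 2)` can be served from a single cached distribution.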
When both players have enough troops to roll the maximum number of dice, they each roll the same number of dice again and again until one of them has lost enough troops that they can no longer roll the maximum. There is a fixed set of possible remaining troop combinations once these identical rolls are over, and we can compute the chance of ending on each remaining troop combination from any possible starting number of troops. This computation is very efficient if we have cached the distribution of outcomes of running k maximum dice rolls for every value of k. We can then combine this distribution with the cached outcome distributions for the non-max dice rolls to get the overall outcome distribution.
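The reason this works is that losses only ever increase, so a battle that leaves the both-rolling-max region can never re-enter it, and the chance of exiting into each troop combination can be read straight off the unconstrained k-roll loss distributions. Here is a self-contained sketch under the same no-modifier 3v2 assumption, with hypothetical thresholds A_MIN and D_MIN standing in for whatever troop counts the game requires to roll the maximum dice:

```python
from functools import lru_cache

A_MIN, D_MIN = 3, 2  # hypothetical max-dice thresholds, not the game's actual rules
# P(attacker loses 0, 1 or 2 troops) in one unmodified 3v2 roll.
PER_ROLL = {0: 2890 / 7776, 1: 2611 / 7776, 2: 2275 / 7776}

@lru_cache(maxsize=None)
def losses_after(k):
    """P(attacker has lost exactly s troops) after k unconstrained max rolls."""
    if k == 0:
        return (1.0,)
    prev = losses_after(k - 1)
    out = [0.0] * (2 * k + 1)
    for s, p in enumerate(prev):
        for i, q in PER_ROLL.items():
            out[s + i] += p * q
    return tuple(out)

def exit_distribution(att, dfn):
    """Chance of leaving the both-roll-max phase at each troop combination (a, d).

    Attacker and defender losses are both monotone, so a state still inside the
    both-max region after k rolls has never been outside it; the unconstrained
    k-roll distribution therefore already gives the chance of still being mid-phase.
    """
    exits = {}
    k = 0
    while True:
        dist = losses_after(k)
        inside = [(s, p) for s, p in enumerate(dist)
                  if p > 0 and att - s >= A_MIN and dfn - (2 * k - s) >= D_MIN]
        if not inside:
            break  # all probability mass has exited the both-max region
        for s, p in inside:
            for i, q in PER_ROLL.items():
                a, d = att - s - i, dfn - (2 * (k + 1) - (s + i))
                if a < A_MIN or d < D_MIN:  # this roll leaves the region
                    exits[(a, d)] = exits.get((a, d), 0.0) + p * q
        k += 1
    return exits
```

The combination step described above is then a weighted sum: the overall outcome distribution is the sum over exit states (a, d) of exits[(a, d)] times the cached sub-max distribution for the (a, d) matchup.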
Storing the outcome distribution of k maximum dice rolls for every value of k is not feasible memory-wise for very large armies, and computing it iteratively is a bit too slow. However, this distribution converges to a Gaussian with easily computed mean and variance, so by caching the exact results for k up to 1000 we only ever need to compute distributions that are already very close to this Gaussian approximation.
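Here is a sketch of that Gaussian tail, again using the unmodified 3v2 odds: the attacker's losses over k independent max rolls have a mean and variance that are just k times the per-roll mean and variance, so beyond the exactly-cached range the distribution can be approximated by a discretised normal with a continuity correction. The `exact_cache` table below is a stand-in for the cached exact results mentioned above:

```python
import math

# P(attacker loses 0, 1 or 2 troops) in one unmodified 3v2 roll.
PER_ROLL = [2890 / 7776, 2611 / 7776, 2275 / 7776]

def gaussian_losses(k):
    """Approximate P(attacker loses s troops) after k max rolls, for s = 0..2k.

    The sum of k i.i.d. per-roll losses converges to a normal distribution with
    mean k*mu and variance k*var; each integer s gets the probability mass of
    the interval [s - 0.5, s + 0.5] (continuity correction).  Defender losses
    are simply 2k minus the attacker's.
    """
    mu = sum(i * p for i, p in enumerate(PER_ROLL))
    var = sum((i - mu) ** 2 * p for i, p in enumerate(PER_ROLL))
    mean, sd = k * mu, math.sqrt(k * var)

    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

    return [cdf(s + 0.5) - cdf(s - 0.5) for s in range(2 * k + 1)]

def k_roll_losses(k, exact_cache, exact_limit=1000):
    """Exact cached distribution for small k, Gaussian approximation beyond it."""
    return exact_cache[k] if k <= exact_limit else gaussian_losses(k)
```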
I tested the new algorithm against every win chance up to 1000 troops as computed by the old algorithm, and they match (they can differ by around 1 in a million due to floating point rounding, but for no matchup does this rounding change whether the 5% win chance cutoff is applied). The full outcome distributions also match up to 200 troops, but I haven't invested the CPU time to compute distributions up to 1000 with the old algorithm. (All of this testing is repeated 7 times to cover every modifier combination that can actually occur in the game.)