The KL-divergence seems a natural fit for quantifying how far a distribution departs from a Maxwellian:
$$
\sum_x p(x) \ln \frac{p(x)}{q(x)}
$$
where $p(x)$ is the target distribution and $q(x)$ is the reference distribution (here, the Maxwellian).
Surprisingly or not, the logarithm here echoes my earlier observation that the current non-Maxwellianity formula yields a biased range of values.
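As a minimal sketch of how this could be computed in practice: the snippet below evaluates the discrete KL sum against a Maxwellian reference on a velocity grid. The grid, thermal speed, and the "suprathermal bump" distortion are all illustrative assumptions, not values from any particular dataset.

```python
import numpy as np

def kl_divergence(p, q):
    # D(p || q) = sum_x p(x) * ln(p(x) / q(x));
    # bins where p(x) == 0 contribute zero by convention
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Reference q: a Maxwellian speed distribution f(v) ~ v^2 exp(-v^2 / (2 v_th^2)),
# discretized on a velocity grid and normalized to sum to 1 (v_th = 1 here)
v = np.linspace(0.01, 5.0, 200)
q = v**2 * np.exp(-v**2 / 2.0)
q /= q.sum()

# Target p: the same Maxwellian plus a hypothetical suprathermal bump
p = q + 0.05 * np.exp(-(v - 3.0)**2 / 0.1)
p /= p.sum()

print(kl_divergence(q, q))  # identical distributions give exactly 0
print(kl_divergence(p, q))  # any deviation gives a positive value
```

Note that the result depends on the binning, and that $q(x)$ must be nonzero wherever $p(x)$ is, otherwise the sum diverges.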