Open JamboChen opened 1 month ago
I've experimentally implemented the PD algorithm (the one used by R) and replaced the rejection sampling method. It passes the lambda=1e9 test and significantly improves performance in the `poisson/100` and `poisson/variable` benchmarks on my machine.
Additionally, I found that the test parameter `1.844E+19` in the Poisson KS test actually exceeds the `i64` range, and the `gamma_lr` function may get stuck in a long loop for large arguments.
You can find my implementation here: Implementation Link
We cannot accept straight translations from R code, which is distributed under the GPLv2. But I assume this is the same algorithm published here in Fortran?
`1.844E+19` is approximately 2^64, and is `MAX_LAMBDA` (introduced in #1498). But the values of λ used in the test are 0.1, 1.0, 7.5, then 1e9 (disabled) and `1.844E+19` (disabled). Our implementation uses a different algorithm for λ < 12 and λ ≥ 12, so the latter isn't currently tested at all (and has no tests with "reasonable" parameters).
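For context on why a cutoff like λ < 12 exists at all: the classic small-λ sampler multiplies uniforms until the product drops below e^{-λ}, which costs O(λ) per draw. The sketch below illustrates that approach; it is not rand_distr's actual code, and the `Lcg` generator is a toy stand-in for a real RNG.

```rust
// Toy linear congruential generator standing in for a real RNG
// (the crate itself would use the `rand` ecosystem, of course).
struct Lcg(u64);

impl Lcg {
    // Returns a uniform f64 in [0, 1) from the high 53 bits of the state.
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Knuth-style product-of-uniforms sampler: multiply uniforms until the
// product drops below e^{-lambda}. Each sample consumes O(lambda)
// uniforms, which is why a different algorithm is needed above the cutoff.
fn poisson_small_lambda(lambda: f64, rng: &mut Lcg) -> u64 {
    let limit = (-lambda).exp();
    let mut k = 0u64;
    let mut prod = rng.next_f64();
    while prod > limit {
        k += 1;
        prod *= rng.next_f64();
    }
    k
}

// Sample mean over n draws, for a quick sanity check against lambda.
fn sample_mean(lambda: f64, n: u64) -> f64 {
    let mut rng = Lcg(0x9E37_79B9_7F4A_7C15);
    (0..n).map(|_| poisson_small_lambda(lambda, &mut rng)).sum::<u64>() as f64 / n as f64
}

fn main() {
    // The sample mean should land close to lambda = 7.5.
    println!("sample mean ≈ {:.3}", sample_mean(7.5, 100_000));
}
```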
> We cannot accept straight translations from R code, which is distributed under the GPLv2. But I assume this is the same algorithm published here in Fortran?
My implementation follows the textual description of the PD algorithm in case A (μ ≥ 10) from the referenced paper (page 11 of the PDF). In fact, the R implementation uses some goto statements which, even if permissible from a licensing perspective, would be difficult to translate directly.
> Our implementation uses a different algorithm for λ < 12 and λ ≥ 12, so the latter isn't currently tested at all
Locally, I enabled the 1e9 test, and it passed. However, because the `gamma_lr` function is so slow (the test took 70 seconds on my machine), I haven't pushed this part of the changes yet.
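The slowness of `gamma_lr` (the regularized lower incomplete gamma, used here to compute the Poisson CDF) is plausible on convergence grounds alone. The standard power series for P(a, x) has successive-term ratio x / (a + n), which stays near 1 for roughly sqrt(λ) terms when a ≈ x ≈ λ. The sketch below (an illustration, not `gamma_lr`'s actual code) just counts terms to show the growth:

```rust
// Illustration (not gamma_lr's actual code): the standard power series
// for the regularized lower incomplete gamma P(a, x),
//   P(a, x) = x^a e^{-x} / Gamma(a + 1) * (1 + x/(a+1) + x^2/((a+1)(a+2)) + ...),
// has successive-term ratio x / (a + n). When a ≈ x ≈ lambda is huge,
// that ratio stays near 1 for roughly sqrt(lambda) terms, so each CDF
// evaluation gets slower as lambda grows.
fn series_terms(a: f64, x: f64) -> u64 {
    let mut term = 1.0_f64;
    let mut sum = 1.0_f64;
    let mut n = 0u64;
    // Stop once the next term no longer changes the sum at f64 precision.
    while term / sum > 1e-16 {
        n += 1;
        term *= x / (a + n as f64);
        sum += term;
    }
    n
}

fn main() {
    for a in [1e2, 1e4, 1e6] {
        // Term count grows roughly like sqrt(a) when x = a.
        println!("a = {a:e}: {} terms", series_terms(a, a));
    }
}
```

At λ around 1e12 this implies on the order of a million terms per CDF evaluation, which is consistent with the runtimes observed above.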
Regarding the discrete KS test function, it may need to be generalized to accept a CDF function of the form `f(u64) -> f64`. Otherwise, most of the samples in the `1.844E+19` test will overflow when converted to `i64`. I made the corresponding changes locally (but have not pushed them yet), though it seems that the `gamma_lr` function is getting stuck in an infinite loop.
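To make the proposed `f(u64) -> f64` generalization concrete, here is a hypothetical sketch of a discrete KS statistic with that signature, so samples near 2^64 never pass through an `i64`. This is not the crate's actual `test_discrete`, just an illustration of the idea:

```rust
// Hypothetical sketch: a discrete KS statistic whose CDF argument is
// f(u64) -> f64, avoiding any i64 conversion of the samples. Not
// rand_distr's actual test helper.
fn ks_statistic<F: Fn(u64) -> f64>(samples: &mut [u64], cdf: F) -> f64 {
    samples.sort_unstable();
    let n = samples.len() as f64;
    let mut d = 0.0_f64;
    for (i, &x) in samples.iter().enumerate() {
        // Compare the model CDF at x against the empirical CDF just
        // below and at the i-th order statistic.
        let f = cdf(x);
        d = d.max((f - i as f64 / n).abs());
        d = d.max((f - (i as f64 + 1.0) / n).abs());
    }
    d
}

fn main() {
    // Uniform on {0, 1, 2, 3}: CDF(k) = (k + 1) / 4 for k < 4.
    let mut samples = [3u64, 0, 2, 1];
    let d = ks_statistic(&mut samples, |k| ((k + 1).min(4)) as f64 / 4.0);
    println!("D = {d}");
}
```

For a discrete distribution the CDF jumps at each support point, which is why both the lower and upper empirical values are compared at every observation.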
I guess it should be fine to test with `1.844E+19 / 2`; I do not see a current need for an `f(u64) -> f64` CDF.
Further to this, we have a `Poisson` impl for `Distribution<u64>`. We should test this, which seems simple enough:
```rust
for (seed, lambda) in parameters.into_iter().enumerate() {
    let dist = Poisson::new(lambda).unwrap();
    test_discrete::<f64, Poisson<f64>, _>(seed as u64, dist, |k| cdf(k, lambda));
    test_discrete::<u64, Poisson<f64>, _>(seed as u64, dist, |k| cdf(k, lambda));
}
```
I recently conducted some challenging tests: both the current implementation and the PD algorithm I implemented fail the KS test when lambda=1e12, but pass when lambda is <=1e11, even though the variance obtained from the PD algorithm appears more reasonable. It seems that when the parameter is too large, the gamma function converges too slowly, and each test takes more than an hour to run on my computer.
These tests are purely experimental, and it's hard to imagine someone actually needing a Poisson distribution with such large parameters. Running these tests in a GitHub Action environment is also unrealistic.
Although the PD algorithm does not pass the tests with extreme parameters, the current benchmarks show that its sampling speed is significantly faster than the current rejection sampling algorithm.
```
poisson/100        time:   [63.7510 cycles 64.1892 cycles 64.7345 cycles]
                   change: [-82.680% -82.506% -82.335%] (p = 0.00 < 0.05)
                   Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
  3 (3.00%) high mild
  3 (3.00%) high severe

poisson/variable   time:   [7804.7153 cycles 8284.4701 cycles 8809.4121 cycles]
                   thrpt:  [8809.4121 cycles/100 8284.4701 cycles/100 7804.7153 cycles/100]
                   change:
                   time:   [-70.306% -68.601% -66.772%] (p = 0.00 < 0.05)
                   thrpt:  [+200.95% +218.48% +236.76%]
                   Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  6 (6.00%) high mild
  1 (1.00%) high severe
```
I am pretty sure the current approach cannot be fixed to work for high lambda, and it seems very plausible that it really is that much slower. So we should definitely replace it completely.
At some point we could just use the normal approximation. Unfortunately, it is hard to quantify how much error this introduces. The best upper bound I know is O(lambda^(-1/2)), but in practice it should be much better.
If it passes the KS test, I guess the normal distribution should be fine.
Summary
At high lambda values, the sample variance of the Poisson distribution deviates noticeably from the mean.
During a Kolmogorov-Smirnov (KS) test, the Poisson distribution fails when lambda approaches `MAX_LAMBDA`. I conducted a simple test to examine the sample mean and variance, and observed the following results. For samples from a Poisson distribution, the sample mean and variance should both be very close to lambda.
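The check described above only needs the sample mean and the sample variance; a minimal helper for computing both (assumed here, not the test code actually used) could look like:

```rust
// Sample mean and unbiased sample variance; for Poisson draws both
// should be close to lambda. Note: converting u64 samples near 2^64 to
// f64 loses precision, which is acceptable for a rough check like this.
fn mean_var(samples: &[u64]) -> (f64, f64) {
    let n = samples.len() as f64;
    let mean = samples.iter().map(|&x| x as f64).sum::<f64>() / n;
    let var = samples
        .iter()
        .map(|&x| (x as f64 - mean).powi(2))
        .sum::<f64>()
        / (n - 1.0);
    (mean, var)
}

fn main() {
    let (m, v) = mean_var(&[1, 2, 3, 4, 5]);
    println!("mean = {m}, var = {v}"); // mean = 3, var = 2.5
}
```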
For reference, I also checked the sample variance of Poisson generators in NumPy and R (I haven't conducted a full goodness-of-fit test yet).
- The NumPy implementation for `lam>=10` is based on the paper "The transformed rejection method for generating Poisson random variables".
- The R implementation is based on the paper "Computer generation of Poisson deviates from modified normal distributions".
Code sample