Closed: glennmagerman closed this issue 9 years ago
Hi Glenn,
Try simulating data with rpldis(4000, 1, 2) and then running OLS on it. That way you know the correct answer. Thanks.
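For reference, a minimal sketch of that sanity check with poweRlaw (fitting the simulated data should recover xmin close to 1 and alpha close to 2):

library("poweRlaw")
x = rpldis(4000, xmin = 1, alpha = 2)  # simulate a discrete power law with known parameters
m = displ$new(x)                       # discrete power-law model object
est = estimate_xmin(m)                 # estimate xmin and alpha from the simulated data
m$setXmin(est)
est$pars                               # compare against the true alpha = 2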
Hi Colin,
many thanks for your quick response!
library("poweRlaw")
# fit a log-normal
m_ln = conlnorm$new(data)
est = estimate_xmin(m_ln)
m_ln$setXmin(est)
# bootstrap fit
bs_ln = bootstrap_p(m_ln, no_of_sims = 10, threads = 8, seed = 1)
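Once the bootstrap finishes, the p-value for the fit can be read from the returned object:

bs_ln$p  # bootstrapped p-value for the log-normal fit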
One thought I had was that the problem occurs when you generate random numbers. I implemented a simple rejection algorithm: simulate N random numbers and reject any that fall below xmin. To make this a bit more efficient, I calculate the expected number of rejections and compensate by simulating a larger N. However, this may make N very large. To test this idea, what are your parameter and xmin values in est?
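A minimal sketch of that rejection idea for the log-normal case (the function name and its body are illustrative, not the package's actual internals):

# Sketch only: draw from the fitted log-normal, keep values >= xmin, and
# inflate the draw size by the expected acceptance rate so roughly n survive.
rejection_sample = function(n, xmin, meanlog, sdlog) {
  p_accept = plnorm(xmin, meanlog, sdlog, lower.tail = FALSE)  # P(X >= xmin)
  n_sim = ceiling(n / p_accept)  # compensate for the expected rejections
  x = rlnorm(n_sim, meanlog, sdlog)
  x = x[x >= xmin]
  head(x, n)
}
# If xmin sits far in the tail (e.g. with raw euro values), p_accept is tiny
# and n_sim explodes, which is how N can become very large.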
Hi Colin,
Sorry for the very late reply. I got things to converge by rescaling the data: I had sales in euros, converted them to millions of euros, and everything went smoothly.
Very strange, though. I know of some non-linear estimators that are not scale-invariant (e.g. the negative binomial), but I would expect a scale-free distribution to have scale-invariant estimators if all goes well. This reminds me of some estimations with Poisson pseudo maximum likelihood, where convergence is not achieved for very large values of x and rescaling helps, yet the estimates are the same regardless of scale (the scale is absorbed in the constant of the regression, for example). Do you think that is what is going on here, perhaps through the calculation of the normalizing constant? If so, I think this deserves more investigation for the MLE in general. It seems far-fetched in this setting, but maybe the key lies in the normalizing constant.
Best, Glenn
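For concreteness, the rescaling workaround described above might look like this (data is assumed to be the raw sales vector in euros):

library("poweRlaw")
data_meur = data / 1e6  # rescale euros to millions of euros
m_ln = conlnorm$new(data_meur)
est = estimate_xmin(m_ln)
m_ln$setXmin(est)
bs_ln = bootstrap_p(m_ln, no_of_sims = 10, threads = 8, seed = 1)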
I'm glad you solved your problem. To be honest, I'm not entirely sure what's going on. Sorry.
Hi Colin, first off: many thanks for your work, this is amazing!
Two (probably very silly) questions:
All the best, Glenn