gokceneraslan / fit_nbinom

Negative binomial maximum likelihood estimate implementation in Python using L-BFGS-B

log likelihood wrong? #4

Open PDiracDelta opened 4 years ago

PDiracDelta commented 4 years ago

EDIT: I just checked the results for my data set and compared them to the R output. Indeed, your version seems to give the correct answers. Why, though (read my post below)?

In log_likelihood I think you switched the terms containing p and 1-p. If you check Wikipedia, p is in the log term that is summed over, and 1-p is in the log term that is not summed over. In your code it's the other way around. You can verify from R's ?dnbinom that its definition of p is identical to that of Wikipedia (you are using R's suggestion for estimating p, or prob).
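For reference, a sketch comparing the two pmf conventions under discussion (function names are mine, and I restrict to integer r so that `math.comb` applies). One common statement of the Wikipedia pmf is C(k+r-1, k)·(1-p)^r·p^k, while R's ?dnbinom documents Γ(x+n)/(Γ(n) x!)·prob^n·(1-prob)^x, which suggests the two p's are complements of each other rather than identical:

```python
from math import comb

def pmf_wikipedia(k, r, p):
    # Wikipedia-style convention: p is raised to the k-th power,
    # i.e. it sits in the term that gets summed over in the log-likelihood.
    return comb(k + r - 1, k) * (1 - p) ** r * p ** k

def pmf_r(k, size, prob):
    # R's ?dnbinom convention: prob^size * (1 - prob)^x
    return comb(k + size - 1, k) * prob ** size * (1 - prob) ** k

# The two formulas agree once prob is identified with 1 - p:
k, r = 3, 5
for p in (0.2, 0.5, 0.8):
    assert abs(pmf_wikipedia(k, r, p) - pmf_r(k, r, 1 - p)) < 1e-12
```

So "swapping p and 1-p" may just be a change of convention rather than a bug, but I have not checked this against the repo's actual code.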

You are also returning -result instead of result, which I likewise think is incorrect.
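(For context, a hedged sketch of why a fitting routine might deliberately return -result: scipy.optimize.minimize with method L-BFGS-B can only minimize, so the maximum-likelihood fit is obtained by minimizing the *negative* log-likelihood. The names below are illustrative, not the repo's actual code, and scipy.stats.nbinom supplies the pmf.)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

def neg_log_likelihood(params, data):
    r, p = params
    # minimize() can only minimize, so return MINUS the log-likelihood:
    # maximizing log L is equivalent to minimizing -log L.
    return -np.sum(nbinom.logpmf(data, r, p))

# Simulated data with known parameters (size=5, prob=0.3), for illustration only
rng = np.random.default_rng(0)
data = nbinom.rvs(5, 0.3, size=2000, random_state=rng)

result = minimize(neg_log_likelihood, x0=[1.0, 0.5], args=(data,),
                  method="L-BFGS-B",
                  bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
r_hat, p_hat = result.x  # estimates should land near the true (5, 0.3)
```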

suntzuisafterU commented 4 years ago

Maybe these two things are related? Swapping p and 1-p, and swapping result with -result?

PDiracDelta commented 4 years ago

When you put p = 1-p at the beginning and return result instead, you get a different result, so that's not how that works. Strangely enough, I tested this a while ago and, if I recall correctly, the current version does give the same result as when you fit in R ... How is that possible?
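(A possible reconciliation, sketched under assumptions: if R's prob corresponds to 1-p in the Wikipedia convention, then the two ways of writing the pmf parameterize the same family, and maximizing either likelihood should find the same fit, just with complementary p estimates. Illustrative code only, using scipy rather than the repo's functions, on simulated data rather than the issue's data set:)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom

# Simulated data with known parameters (size=4, prob=0.4), illustrative only
rng = np.random.default_rng(1)
data = nbinom.rvs(4, 0.4, size=3000, random_state=rng)

def nll(params, data, swap):
    """Negative log-likelihood; swap=True replaces p by 1 - p."""
    r, p = params
    prob = 1.0 - p if swap else p
    return -np.sum(nbinom.logpmf(data, r, prob))

kwargs = dict(x0=[1.0, 0.5], method="L-BFGS-B",
              bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
r_a, p_a = minimize(nll, args=(data, False), **kwargs).x
r_b, p_b = minimize(nll, args=(data, True), **kwargs).x

# Both variants reach the same maximum of the likelihood;
# the two p estimates are complements (p_b close to 1 - p_a).
```

If that is what is going on here, "swapping" p and 1-p changes which convention the reported p lives in, but not the quality of the fit, which could explain agreement with R either way.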