melff / mclogit

mclogit: Multinomial Logit Models, with or without Random Effects or Overdispersion
http://melff.github.io/mclogit/

can I conduct a likelihood ratio test with output from a mixed effects model in mblogit? #9

Closed rebeccaltaylor closed 4 years ago

rebeccaltaylor commented 4 years ago

Hi.

I have been fitting fixed effects models with mblogit and then conducting likelihood ratio tests using the anova() and pchisq() functions.

Today, my first mixed effects model converged and I checked to see which method is better to use for conducting a likelihood ratio test: MQL or PQL. Unfortunately, the information I read suggests neither.

Can you please tell me if there is a way to conduct a likelihood ratio test with the output from a mixed effects model in mblogit, and if not, can you suggest a test that will work? I am testing the effect of a single treatment, in the presence of a random effect of individual, and sometimes with other fixed covariates. My response variable has three categories, and I am testing for an effect in both logits.

I will paste some sample lines of code below.

Thank you very much!

Rebecca

MBLrWM <- mblogit(aBehavior ~ Trt15_20_10min, random = ~1|Deployment, data = Swm, maxit = 100, method = "MQL")
MBLrW0M <- mblogit(aBehavior ~ 1, random = ~1|Deployment, data = Swm, maxit = 100, method = "MQL")
anova(MBLrW0M, MBLrWM)
pchisq(0.53241, df = 2, lower.tail = FALSE)

rebeccaltaylor commented 4 years ago

I am so sorry to bother you with another question, but I just looked at the variance-covariance of the random effects from the first model call given above, one with the MQL method, and one with the PQL method. They appear to be very different, and I have reprinted them below. Anything you can do to help me would be very much appreciated.

MQL:

(Co-)Variances:

Grouping level: 1

         Estimate          Std.Err.
For~1     2.7604            0.8721
HO~1     -0.6779   8.9009   1.1230   2.7437

PQL:

(Co-)Variances:

Grouping level: 1

         Estimate          Std.Err.
For~1     1.9713            0.6556
HO~1      0.3991   3.7709   0.5324   0.7616

melff commented 4 years ago

Please direct support questions to me via email instead of using the issues interface (which is for bug reports).

Usually one should be careful when interpreting the results of anova() when approximate likelihood methods are used for estimation. This applies in particular to PQL and MQL estimates, which are known to be biased, particularly when clusters are small. In the case of your result, however, the conclusion is pretty unequivocal: the data do not lend support to rejecting the null hypothesis. But if you want to be sure, I would suggest using simulation-based p-values.

The difference between the PQL and MQL estimates may reflect sampling fluctuations, with the biases of PQL and MQL going in different directions. This is of course difficult to decide unless one knows the true parameter values.
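A simulation-based p-value of the kind suggested above is usually obtained via a parametric bootstrap: refit both models to data simulated under the null model and compare the observed likelihood-ratio statistic to the simulated distribution. The sketch below reuses the model objects from the code earlier in the thread (MBLrW0M, MBLrWM); it is only an outline, and the simulate_from_null() helper is hypothetical — mblogit may not provide a simulate() method, so that step would need to be coded by hand from the null model's fitted category probabilities.

```r
## Parametric bootstrap LRT -- a sketch under stated assumptions, not a
## tested mclogit recipe. MBLrW0M (null) and MBLrWM (alternative) are the
## fitted mblogit models from the code above; Swm is the data frame.

obs_lrt <- as.numeric(2 * (logLik(MBLrWM) - logLik(MBLrW0M)))

B <- 999                     # number of bootstrap replicates
boot_lrt <- numeric(B)
for (b in seq_len(B)) {
  sim_dat <- Swm
  ## Hypothetical helper: draw a new response from the null model's
  ## fitted multinomial probabilities (and random-effect distribution).
  sim_dat$aBehavior <- simulate_from_null(MBLrW0M)
  fit0 <- update(MBLrW0M, data = sim_dat)
  fit1 <- update(MBLrWM,  data = sim_dat)
  boot_lrt[b] <- as.numeric(2 * (logLik(fit1) - logLik(fit0)))
}

## Simulation-based p-value: proportion of simulated LR statistics at
## least as large as the observed one, with the usual +1 correction.
p_sim <- (1 + sum(boot_lrt >= obs_lrt)) / (B + 1)
```

The +1 correction keeps the p-value away from exactly zero and makes it valid as a Monte Carlo test; packages such as pbkrtest implement the same idea for lme4 models, which may be a useful point of comparison.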

rebeccaltaylor commented 4 years ago

Dr. Elff,

Thank you for your fast and helpful reply -- my apologies for posting to GitHub. I hope it is okay to reply to this address. I tried to put the following address into the "To" line, but my email program would not send it: martin.elff'at'zu.de .

I have other situations that will not be as clear-cut as the example I provided, so I appreciate your suggestion regarding the simulation-based p-values.

Thanks again and cheers,

Rebecca

