Thanks for providing this nice library.
I ran into a case where some very unlikely phenotypes yielded p-values of 0, which were (I believe) due to the GLM not converging, so I think it would make sense to always check for convergence (it took us quite some time to track down the probable cause of the error). My specific problem occurred when one of the two classes of a binary phenotype had very few instances (e.g. 20 of 39000 in my case). Then, especially when including `--sensitivity`, the whole GLM fitting is numerically highly unstable and `mylogit$converged == FALSE`. When confidence intervals are also computed, this sometimes seems to be checked automatically, but I still get several phenotypes with non-converged GLMs (although at least the p-values are fine in those cases, i.e. > 0.05). I would assume the same problem can occur with categorical phenotypes.
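For illustration, here is a minimal R sketch of the kind of check I have in mind. The data, variable names (`pheno`, `geno`), and model formula are made up for this example; `converged` is the standard component of the fit object returned by `stats::glm`:

```r
# Hypothetical rare binary phenotype: 20 cases out of 39000 (as in my data).
set.seed(1)
n <- 39000
pheno <- c(rep(1, 20), rep(0, n - 20))
geno  <- rbinom(n, 2, 0.3)  # made-up genotype covariate

mylogit <- glm(pheno ~ geno, family = binomial())

# Guard against reporting p-values from a non-converged fit.
if (!mylogit$converged) {
  warning("GLM did not converge; p-value is unreliable, returning NA")
  p_value <- NA
} else {
  p_value <- summary(mylogit)$coefficients["geno", "Pr(>|z|)"]
}
```

Something along these lines (warn, and/or set the p-value to NA) would have saved us a lot of debugging time.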