There are several situations in which the program stops with the message "A Bayes factor is infinite". This happens because, particularly when n is large, the Bayes factor (compared with the null model) tends to be an extremely large number.
In the documentation we have given hints on how to avoid this problem. The main idea is to use a better null model so that the magnitude of the Bayes factors is reduced, but this only works in limited situations.
With this issue and the corresponding branch I open the possibility of tackling this problem by rescaling all the Bayes factors, multiplying them by a very small number (that is, subtracting a constant ESTAB.CONST on the log scale), so that the very large ones are greatly reduced while none of them becomes zero.
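Just to illustrate the mechanism, here is a minimal R sketch; the logBF values and the ESTAB.CONST used below are hypothetical placeholders, not the values the package would actually compute:

```r
## Hypothetical log Bayes factors (vs the null model), for illustration only
logBF <- c(-2, 5, 350, 800)

exp(logBF)                   # the largest one overflows to Inf
ESTAB.CONST <- 705           # hypothetical rescaling constant
exp(logBF - ESTAB.CONST)     # all finite, and none is exactly zero here
```

Since the Bayes factors only enter the posterior model probabilities through their ratios, subtracting a common constant on the log scale does not change the final results.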
Obviously, if ESTAB.CONST is too large, we could be assigning BF=0 to some (many?) models. Suppose the minimum value of BF is min.BF; since L.MIN:=log(.Machine$double.xmin)=-708.3964, we can take ESTAB.CONST=log(min.BF)-L.MIN as the largest constant that does not kill any model.
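As a quick check in R (min.BF is of course unknown; the value below is a hypothetical placeholder just to show the relation):

```r
## Log of the smallest positive normalised double: below this, exp() falls
## into the subnormal range and eventually underflows to zero.
L.MIN <- log(.Machine$double.xmin)
L.MIN                              # approximately -708.3964

## With a (hypothetical) known min.BF, the largest constant that kills no model:
min.BF <- 1e-10                    # placeholder value, for illustration only
ESTAB.CONST <- log(min.BF) - L.MIN
exp(log(min.BF) - ESTAB.CONST)     # maps the smallest BF to about .Machine$double.xmin
```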
Of course, we do not know the value of min.BF. If we simply took min.BF=1, we would be killing all models with BF<1. Alternatives? For the gZellner prior, min.BF>exp(((n-p)/2.0)*log(1.0+n)-((n-k0)/2.0)*log(1.0+n*1))=(1.0+n)^((k0-p)/2.0), which would correspond to an extremely complex model (p parameters) without any fit.
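A quick numerical check of that simplification in R, with hypothetical values of n, p and k0 (placeholders only, not taken from any real data set):

```r
## Hypothetical values, for illustration only
n  <- 5000
p  <- 20
k0 <- 1

## Lower bound for min.BF under the gZellner prior, as written above...
bound1 <- exp(((n - p) / 2.0) * log(1.0 + n) - ((n - k0) / 2.0) * log(1.0 + n * 1))
## ...and its simplified form
bound2 <- (1.0 + n)^((k0 - p) / 2.0)
all.equal(bound1, bound2)    # TRUE: both equal (1+n)^((k0-p)/2), a value below 1
```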
The proposal is hence to use:
ESTAB.CONST=log(1.0+n)*(k0-p)/2.0+705.0
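A small R sketch of the proposed constant, with hypothetical values of n, p and k0 (none of these come from real data; the actual computation would live inside the package code):

```r
## Hypothetical problem dimensions, for illustration only
n  <- 5000    # sample size
p  <- 20      # dimension of the most complex model
k0 <- 1       # dimension of the null model

## Proposed constant: log of the lower bound (1+n)^((k0-p)/2) for min.BF,
## plus 705 (just below -log(.Machine$double.xmin) = 708.3964, presumably
## to keep a small safety margin).
ESTAB.CONST <- log(1.0 + n) * (k0 - p) / 2.0 + 705.0

## Any achievable log Bayes factor exceeds log(1+n)*(k0-p)/2, so after
## subtracting ESTAB.CONST it stays above -705 and does not underflow:
worst.logBF <- log(1.0 + n) * (k0 - p) / 2.0
exp(worst.logBF - ESTAB.CONST)   # exp(-705): tiny but strictly positive
```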
I will start incorporating this idea gradually.