Hi Guido,
I have started using your meta package and it is a very useful piece of software, so thanks a lot for publishing and maintaining it!
However, I have come across a strange bug which I think is due to a numerical issue.
I have the following data:
mean.e  sd.e  mean.c  sd.c   n.e  n.c
 -2.08  0.53   -2.15  0.32   696  558
 -2.60  0.55   -0.24  0.26  1028  808
This is part of a larger data frame, but that is irrelevant here. I noticed that these two studies only get assigned an SMD value at all if I use Cohen's d, and even then they get no weight and no confidence interval. I played around with the table to see what might be causing the problem, and it seems to be the large sample sizes of the two studies. I ran three analyses: once with the full data, once with n = 200 for both studies and groups, and once with n = 100 for both studies and groups; the n = 100 call is shown below.
> results <- metacont(n.e = c(100, 100),
+   mean.e = mean.e,
+   sd.e = sd.e,
+   n.c = c(100, 100),
+   mean.c = mean.c,
+   sd.c = sd.c,
+   random = TRUE, studlab = 1:2,
+   data = data3, sm = "SMD",
+   method.smd = "Cohen")
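The other two calls differ only in the n.e and n.c arguments; for the full data those are simply taken from the data frame. Roughly like this (the object name results_full is just for illustration, everything else as above):

results_full <- metacont(n.e = n.e, mean.e = mean.e, sd.e = sd.e,
                         n.c = n.c, mean.c = mean.c, sd.c = sd.c,
                         random = TRUE, studlab = 1:2,
                         data = data3, sm = "SMD",
                         method.smd = "Cohen")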
So the problem occurs somewhere between an n of 100 and 200 per group. My guess is that this is a rounding issue, i.e. something in the calculation of the CI becomes so small that it is rounded to 0, and the study subsequently gets assigned no weight.
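Just to illustrate the kind of numerical breakdown I mean (this is purely a guess on my part; J_naive and J_log below are my own toy functions, not the actual meta code): as far as I understand, the exact SMD formulas involve Hedges' correction factor J(m) = gamma(m/2) / (sqrt(m/2) * gamma((m - 1)/2)) with m = n.e + n.c - 2, and gamma() overflows to Inf once its argument passes roughly 170, which happens somewhere between 100 and 200 patients per group:

## Toy illustration only, not the meta source code
J_naive <- function(m)
  gamma(m / 2) / (sqrt(m / 2) * gamma((m - 1) / 2))

J_naive(2 * 100 - 2)  # m = 198: about 0.996, finite
J_naive(2 * 200 - 2)  # m = 398: gamma(199) overflows to Inf, so Inf/Inf = NaN

## The same quantity computed on the log scale stays finite
J_log <- function(m)
  exp(lgamma(m / 2) - lgamma((m - 1) / 2) - 0.5 * log(m / 2))

J_log(2 * 200 - 2)    # about 0.998

Is there any way I can fix/prevent this?

Cheers, Florin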
Florin,
Thank you for pointing out this bug in the exact SMD method. I just submitted meta, version 5.1-1, which should be available on CRAN in the coming days.
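Once the new version is on CRAN, a regular package update should be all that is needed:

install.packages("meta")   # picks up the fixed version once it is on CRAN
packageVersion("meta")     # should then report 5.1-1 (or later)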
Best,
Guido