tomfaulkenberry / JASPbook

Source files for "Learning Statistics with JASP"

some comments about 8.5.2 and 8.5.3 #22

Open michaeladamkatz opened 3 years ago

michaeladamkatz commented 3 years ago

When you say

Okay, so you can see that there are two rather different but legitimate ways to interpret the p value, one based on Neyman’s approach to hypothesis testing and the other based on Fisher’s.

I'm having trouble grasping that the two are "rather different". On the one hand, we say "what error rate are you willing to tolerate?" and on the other hand we say "what is the chance that you might have gotten this particular data given that the null hypothesis is true?" It may just be that I'm not understanding, but to me those sound quite similar (and I personally prefer the latter because it feels like a simpler way to say it).

But the main point I want to make is that the standard/Fisher definition seems to be rushed over too quickly in 8.5.2, with just the sentence "we can define the p-value as the probability that we would have observed a test statistic that is at least as extreme as the one we actually did get". That's an abstract-sounding mouthful, and given that this concept is so fundamental, and that the Fisher formulation is the standard one, it seems to me it would be good to restate it in terms of the ESP experiment numbers, as is done in section 8.5.1. So it would be a statement like, "On Fisher's definition, by saying p = .021, we are saying that there is a 2.1% chance that, even though the null hypothesis is true (that is, theta really does equal 0.5 and ESP is in fact bogus), I would nonetheless have gotten a result at least as extreme as the 62 out of 100 I got in my particular experiment."
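
For what it's worth, here's a quick sanity check of the p = .021 figure outside of JASP. This is just a sketch in Python/scipy (not anything from the book): it computes the chance of a result at least as extreme as 62 correct out of 100 under the null hypothesis theta = 0.5, counting both tails, which is where the .021 comes from.

```python
# Quick check of p = .021 for the ESP example: 62 correct out of 100,
# null hypothesis theta = 0.5. Not from the book; just scipy.
from scipy.stats import binom, binomtest

n, k, theta = 100, 62, 0.5

# "At least as extreme" counts both tails under the null:
# 62 or more correct, or (by symmetry) 38 or fewer correct.
upper = binom.sf(k - 1, n, theta)   # P(X >= 62), roughly 0.0105
lower = binom.cdf(n - k, n, theta)  # P(X <= 38), roughly 0.0105
print(upper + lower)                # roughly 0.021

# Same number from scipy's exact two-sided binomial test.
print(binomtest(k, n, theta).pvalue)  # roughly 0.021
```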