Closed: NatJWalker-Hale closed this 10 months ago
Dear @NatJWalker-Hale,
You are correct: the proportions (p0, p1, p2) are constrained to be the same between Test and Reference. This is as intended by the original paper (see Fig. 1, also attached here). A different model (Partitioned Exploratory) does not impose this constraint.
In "live" data without constraints, you would never (well, almost never) expect the estimates to coincide exactly. Here they are forced to be the same by design.
Best, Sergei
Dear @spond,
Ah, thanks so much for clarifying! I had misread the caption of Fig. 5 in the paper and interpreted those as results from the RELAX alternative. For presenting results in a publication, is it advisable to report the partitioned descriptive fit (alongside p-values from the comparison of the RELAX null and alternative), as you do in Fig. 5 of Wertheim et al.?
Thanks again,
Nat
Dear @NatJWalker-Hale,
I would view the partitioned descriptive fit almost like a "normality check" for t-tests or ANOVA. If the partitioned descriptive model fits the data much better than the RELAX alternative model (measured by a sufficiently large ΔAIC-c, say 10), then you might infer that the distributional assumption of the RELAX test (shared proportions) is not the most appropriate for the data at hand, and the RELAX result should be interpreted with caution.
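To make that check concrete, AIC-c can be computed directly from each fit's log-likelihood and parameter count, and the two models compared by their difference. A minimal sketch; the log-likelihoods, parameter counts, and sample size below are made-up illustration values, not from any real fit:

```python
def aicc(log_l: float, k: int, n: int) -> float:
    """Small-sample corrected AIC: -2*lnL + 2k + 2k(k+1)/(n - k - 1)."""
    return -2.0 * log_l + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical fit results (log-likelihood, number of estimated parameters);
# n is the sample size, e.g. the number of alignment sites.
n = 500
relax_alt = aicc(log_l=-12345.6, k=20, n=n)  # RELAX alternative
part_desc = aicc(log_l=-12330.1, k=26, n=n)  # partitioned descriptive

delta = relax_alt - part_desc
if delta >= 10:
    print("Partitioned descriptive fits much better; interpret RELAX with caution")
else:
    print("No strong evidence against the shared-proportion assumption")
```

With these illustrative numbers the extra parameters of the descriptive model buy enough likelihood that ΔAIC-c exceeds the suggested threshold of 10.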
Best, Sergei
Okay, understood, many thanks for clarifying!
Hi guys,
I'm looking at some RELAX fits from hyphy v2.5.53 in which all of the genes appear to have the same p0, p1, p2 between the Reference and Test partitions under the RELAX alternative model, with the significant results varying only in w0, w1, w2. I had a couple of questions about this.
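For context, this is the kind of comparison I was making: pulling the per-partition rate distributions out of the RELAX results JSON and checking the proportions side by side. A rough sketch against a mock object; the key names here ("fits" → model name → "Rate Distributions") are my assumption about the layout and may not match every hyphy version, so check your own file:

```python
import json

# Mock of a RELAX results JSON; proportions and omegas are invented for illustration.
results = json.loads("""
{
  "fits": {
    "RELAX alternative": {
      "Rate Distributions": {
        "Test":      {"0": {"omega": 0.02, "proportion": 0.70},
                      "1": {"omega": 0.80, "proportion": 0.25},
                      "2": {"omega": 3.50, "proportion": 0.05}},
        "Reference": {"0": {"omega": 0.05, "proportion": 0.70},
                      "1": {"omega": 0.90, "proportion": 0.25},
                      "2": {"omega": 2.10, "proportion": 0.05}}
      }
    }
  }
}
""")

dists = results["fits"]["RELAX alternative"]["Rate Distributions"]
for cls in sorted(dists["Test"]):
    t, r = dists["Test"][cls], dists["Reference"][cls]
    # Proportions match between partitions; only the omegas differ.
    same_p = abs(t["proportion"] - r["proportion"]) < 1e-8
    print(f"class {cls}: p_test={t['proportion']} p_ref={r['proportion']} "
          f"equal={same_p} w_test={t['omega']} w_ref={r['omega']}")
```

Run against real output the pattern is the same as I describe above: identical proportions in every rate class, differing omegas.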
Thanks a lot,
Nat