RobinHankin / hyper2

https://robinhankin.github.io/hyper2/

specificp.test() #44

Open RobinHankin opened 4 years ago

RobinHankin commented 4 years ago

As it is, function specificp.test() tests the hypothesis that a single strength has a particular value. But something like

specificp.test(volvo2014,c(1,3), c(0.2,0.23))

should make sense (testing the hypothesis that p_1=0.2, p_3=0.23). Or maybe

specificp.test(volvo2014,c("AbuDhabi"=0.1, "DongFeng"=0.12))

RobinHankin commented 4 years ago

Actually it's not as straightforward as I thought. Function specificp.test() currently uses maxp() internally, which is efficient because it has access to derivatives. It can do this because it considers two distinct linear restrictions: p_i >= v and p_i <= v. The (unique) global likelihood maximum must lie in one or the other of these spaces. Whichever one it is in, the other space must have a smaller maximum, and that smaller maximum must be attained on the boundary, namely p_i == v. So the smaller of the two restricted maxima is the maximum on the boundary.
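
In outline, a minimal sketch of the single-strength version might look like this (a sketch only, not the package internals; it assumes that maxp()'s fcm/fcv arguments impose the linear constraints fcm %*% p >= fcv on the first n-1 strengths under the fillup convention, that loglik() accepts those first n-1 components, and that i is not the fillup):

library(hyper2)
i <- 1                 # index of the strength under test
v <- 0.2               # hypothesized value, p_i = v
n <- size(volvo2014)   # number of competitors
e <- rep(0, n - 1)
e[i] <- 1              # picks out p_i among the n-1 free components

p_ge <- maxp(volvo2014, fcm = rbind(e),  fcv = v)   # maximum subject to p_i >= v
p_le <- maxp(volvo2014, fcm = rbind(-e), fcv = -v)  # maximum subject to p_i <= v

## the smaller of the two restricted maxima is attained on p_i == v:
min(loglik(p_ge[-n], volvo2014), loglik(p_le[-n], volvo2014))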

This technique is not straightforward to generalize to a multivariate constraint such as p_1=v_1, p_2=v_2: with two constraints there are four one-sided regions, and the restricted maximum in a region need not lie on the intersection p_1=v_1, p_2=v_2 (it can sit on a face where only one of the constraints is active).

Note that direct implementation of this restriction would change the derivatives: the fillup value would behave differently. Further, a function like samep.test() uses a different objective function for which derivatives are not available.

RobinHankin commented 3 years ago

This issue is conceptually distinct from issue #78, in which all the strengths are known.

RobinHankin commented 3 years ago

OK, it would nevertheless be possible to implement specificp.test(volvo2014, c(1,3), c(0.2,0.23)) using Nelder-Mead, and just take the performance hit (due to not having derivatives). Currently, function samep.test() does not use derivatives either.
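
Something along the following lines might work (a sketch only, using base optim() with Nelder-Mead rather than the package internals; constrained_max() is just an illustrative helper, and it assumes loglik() takes the first n-1 strengths and size() gives the number of competitors):

library(hyper2)

constrained_max <- function(H, fixed_idx, fixed_val) {
  n <- size(H)
  free <- setdiff(seq_len(n), fixed_idx)   # indices of the unconstrained strengths
  leftover <- 1 - sum(fixed_val)           # probability mass left for them

  ## negative support as a function of unconstrained reals, mapped to the
  ## free strengths by a softmax-type transform so they stay positive and
  ## sum to 'leftover':
  negsupp <- function(theta) {
    w <- exp(theta)
    p <- numeric(n)
    p[fixed_idx] <- fixed_val
    p[free] <- leftover * w / sum(w)
    -loglik(p[-n], H)
  }

  optim(rep(0, length(free)), negsupp, method = "Nelder-Mead")
}

## e.g. constrained_max(volvo2014, c(1, 3), c(0.2, 0.23)); the restricted
## maximum support is then minus the returned $value.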

RobinHankin commented 3 years ago

The new-style idiom would be specificp.test(volvo,c(AbuDhabi=0.2, Brunel=0.23))