RobinHankin opened this issue 4 years ago
Actually it's not as straightforward as I thought. Function specificp.test() currently uses maxp() internally, which is efficient because it has access to derivatives. It can do this because it considers two distinct linear restrictions: p_i >= v and p_i <= v. The (unique) global likelihood maximum must lie in one or other of these spaces. Whichever one it is in, the other space must have a smaller maximum, and that maximum must lie on the boundary, namely p_i == v. So taking the smaller of the two restricted optima gives the maximum on the boundary.
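To make the boundary argument concrete, here is a minimal self-contained sketch in plain R (it does not use hyper2 itself): a three-player Bradley-Terry log-likelihood on the unit simplex, with the two half-space restrictions p_1 >= v and p_1 <= v fitted separately via constrOptim(). The wins matrix and the value v = 0.3 are made up for illustration.

```r
## Toy data: wins[i, j] = number of times player i beat player j
wins <- matrix(c(0, 7, 9,
                 3, 0, 6,
                 1, 4, 0), 3, 3, byrow = TRUE)

negloglik <- function(p12) {      # free parameters p1, p2; p3 is the fillup
  p <- c(p12, 1 - sum(p12))
  if (any(p <= 0)) return(1e10)   # crude penalty to stay inside the simplex
  ll <- 0
  for (i in 1:3) for (j in 1:3) if (i != j)
    ll <- ll + wins[i, j] * log(p[i] / (p[i] + p[j]))
  -ll
}

v <- 0.3    # hypothesised value of p1 (illustrative)

## Fit on each side of the hyperplane p1 = v; the rows of ui encode the
## simplex interior (p1 > 0, p2 > 0, p1 + p2 < 1) plus one half-space.
fit_side <- function(ui_extra, ci_extra, start) {
  constrOptim(start, negloglik, grad = NULL,
              ui = rbind(diag(2), c(-1, -1), ui_extra),
              ci = c(0, 0, -1, ci_extra))
}
above <- fit_side(c( 1, 0),  v, start = c(v + 0.05, 0.3))   # p1 >= v
below <- fit_side(c(-1, 0), -v, start = c(v - 0.05, 0.3))   # p1 <= v

## The global maximum lies on one side; the other side's optimum sits on the
## boundary p1 == v, so the larger of these two negative log-likelihoods
## corresponds to the maximum support achievable on the boundary.
c(above = negloglik(above$par), below = negloglik(below$par))
```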
This technique is not straightforward to generalize to a multivariate constraint such as p_1 = v_1, p_2 = v_2.
Note that a direct implementation of this restriction would change the derivatives: the fillup value would behave differently. Further, a function like samep.test() uses a different objective function for which derivatives are not available.
This issue is conceptually distinct from issue #78, in which all the strengths are known.
OK, but it would be possible to implement specificp.test(volvo2014, c(1,3), c(0.2,0.23)) using Nelder-Mead and just take the performance hit due to not having derivatives. Currently, function samep.test() does not use derivatives either. The new-style idiom would be specificp.test(volvo, c(AbuDhabi=0.2, Brunel=0.23)).
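Here is a rough sketch of that Nelder-Mead route, again in plain R with made-up data rather than a patch to specificp.test() itself: fix p_1 and p_3 at hypothesised values, optimize the remaining strengths derivative-free with optim(), and compare against the unconstrained maximum with a likelihood-ratio test.

```r
## Toy five-player Bradley-Terry example; aim: test p1 = 0.2, p3 = 0.23 jointly.
set.seed(1)
wins <- matrix(rpois(25, 5), 5, 5); diag(wins) <- 0   # wins[i, j] = times i beat j

bt_loglik <- function(p)                    # p is the full strength vector
  sum(wins * log(outer(p, p, function(a, b) a / (a + b))))

## Constrained fit: p1, p3 fixed; p2, p4 free; p5 is the fillup.
negll_constrained <- function(free) {
  p <- numeric(5)
  p[c(1, 3)] <- c(0.2, 0.23)
  p[c(2, 4)] <- free
  p[5]       <- 1 - sum(p[1:4])
  if (any(p <= 0)) return(1e10)             # crude penalty to stay on the simplex
  -bt_loglik(p)
}
con <- optim(c(0.2, 0.2), negll_constrained, method = "Nelder-Mead")

## Unconstrained fit: p1..p4 free, p5 the fillup.
negll_free <- function(free) {
  p <- c(free, 1 - sum(free))
  if (any(p <= 0)) return(1e10)
  -bt_loglik(p)
}
un <- optim(rep(0.2, 4), negll_free, method = "Nelder-Mead")

## Likelihood-ratio test; df = number of constrained strengths (2 here).
lr <- 2 * (con$value - un$value)
pchisq(lr, df = 2, lower.tail = FALSE)
```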
As it is, function specificp.test() tests the hypothesis that a single strength has a particular value. But something like specificp.test(volvo2014, c(1,3), c(0.2,0.23)) should make sense, testing the hypothesis that p_1 = 0.2, p_3 = 0.23. Or maybe the new-style idiom specificp.test(volvo, c(AbuDhabi=0.2, Brunel=0.23)) would be preferable.
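A hypothetical helper, parse_constraint() (not part of hyper2; the player names are illustrative), showing how the named-vector idiom might be translated into the index/value pair that the existing interface takes:

```r
## Hypothetical helper: turn c(AbuDhabi = 0.2, Brunel = 0.23) into indices + values.
parse_constraint <- function(player_names, spec) {
  idx <- match(names(spec), player_names)
  stopifnot(!anyNA(idx))                    # every named player must exist
  list(which = idx, values = unname(spec))
}

## Illustrative player names:
parse_constraint(c("AbuDhabi", "Brunel", "Dongfeng"),
                 c(AbuDhabi = 0.2, Brunel = 0.23))
## gives list(which = c(1, 2), values = c(0.2, 0.23))
```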