Closed jmhessel closed 6 years ago
Yes, it is, as the correlation heuristic was developed for that setting in particular (N & B show that it was not possible to derive it in the cross-validation setting).
best, Giorgio
On Thu, Oct 11, 2018 at 9:17 PM Jack Hessel notifications@github.com wrote:
Hi again! I had another question about the assumptions used by baycomp's two_on_single function. In particular, footnote 2 in the JMLR paper mentions that Nadeau and Bengio's (2003) correction was originally conceived in the setting of random train/test splits (rather than k-fold cross-validation), but this setting is not mentioned elsewhere in the work. Is it acceptable to use two_on_single for random train/test splits (rather than k-fold cross-validation) as well?
— Reply to this email directly or view it on GitHub: https://github.com/janezd/baycomp/issues/3
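For context on the correction being discussed: Nadeau and Bengio (2003) observed that, over repeated random train/test splits of the same dataset, the score differences are correlated because the training sets overlap, so the naive variance of the mean difference is too small. Their fix inflates the variance term by n_test/n_train. A minimal numpy sketch of the corrected resampled t-statistic (this is the frequentist form of the correction, not baycomp's internal Bayesian implementation; the function name is ours):

```python
import numpy as np

def corrected_resampled_t(diffs, n_train, n_test):
    """Nadeau & Bengio (2003) corrected resampled t-statistic.

    diffs: score differences (classifier A minus classifier B) over
    n repeated random train/test splits of the SAME dataset.
    The naive variance of the mean, var/n, is replaced by
    (1/n + n_test/n_train) * var to account for the correlation
    induced by overlapping training sets.
    """
    diffs = np.asarray(diffs, dtype=float)
    n = len(diffs)
    mean = diffs.mean()
    var = diffs.var(ddof=1)  # sample variance of the differences
    # corrected variance of the mean difference
    corrected = (1.0 / n + n_test / n_train) * var
    return mean / np.sqrt(corrected)

# example: 5 random 90/10 splits of one dataset
t = corrected_resampled_t([0.1, 0.2, 0.15, 0.05, 0.1],
                          n_train=90, n_test=10)
```

Because the correction only adds a positive term to the variance, the corrected statistic is always smaller in magnitude than the naive one, i.e. the test is more conservative.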
okay great -- thanks! just wanted to make sure that baycomp.two_on_single
was safe to use in this case!! :)