In Figure 5 the pattern is similar for all models except WAPLS: an eventual shift toward the mean. WAPLS has much higher error, so I'm going to drop the `free_y` option on the plot scales. Even so, WAPLS seems more stable than the others, though that may just be an artifact of the scaling.
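For reference, here's a minimal sketch of the scaling change in ggplot2. The data frame and column names (`preds`, `run_size`, `error`, `model`) are hypothetical stand-ins, not the actual analysis objects; the point is only the `scales` argument to `facet_wrap()`.

```r
library(ggplot2)

# Hypothetical stand-in for the prediction-error results.
preds <- data.frame(
  run_size = rep(seq(0.1, 1, by = 0.1), times = 4),
  error    = abs(rnorm(40)),
  model    = rep(c("MAT", "WA", "WAPLS", "rFor"), each = 10)
)

# scales = "free_y" gives each panel its own y axis, so WAPLS's large
# errors don't flatten the other panels; dropping it (the default,
# scales = "fixed") puts every model on a common y axis, which makes
# the between-model comparison honest but hides within-model detail.
p <- ggplot(preds, aes(x = run_size, y = error)) +
  geom_point() +
  facet_wrap(~ model)  # fixed scales by default

p
```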
Figure 6 (bias, or squared error) is harder to interpret. The models all look fairly similar (except WA; I'm fixing that, it was a bug in the code). rFor does better than MAT here, and both do better than WAPLS, though some of WAPLS's problem seems to come from a couple of outliers.
Figure 7 (variance). WAPLS again has extreme outliers, but MAT also shows very high variability across the bootstrap runs, along with a strange pattern: there appears to be a set of points whose variance pops up and drops back down at every other run size. The variance falls suddenly at ~0.8, which I assume is a discontinuity caused by certain points being excluded entirely. Variability drops there, but error (bias) keeps increasing continuously.