First, for most dates there is a strong positive trend between deviations from pruning and deviations from observed. This is the first plot below: the x-value is how different the hypothesis is from pruning for a given kinematics condition, and the y-value is how much worse that hypothesis does than pruning at predicting observed. So deviating from pruning here means deviating from observed.
A similar view of this uses the same x-values (except I re-fit, so they're not identical), but for the y-values plots each hypothesis's mean error from observed directly, without reference to pruning. Here again we see positive trends across many sessions, meaning hypotheses that deviate more from pruning predict observed worse.
This also gives an idea of magnitude: habitual and cloud are both closer to pruning than they are to observed, so these hypotheses are all fairly similar. But the more dissimilar they get from pruning, the worse they do at predicting observed.
Overall this trend is stronger for habitual than for cloud. The one exception to the sign of the relationship is blatant: for session 20120709, below, habitual is clearly better. Maybe I need to re-tune pruning's parameters for that session?
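For concreteness, here's a minimal sketch of how these per-condition scatters could be built. Everything in it is hypothetical scaffolding, not the actual codebase: it assumes latents are summarized as per-condition mean vectors, uses Euclidean distance as the error metric, and the names (`latents`, `observed`) are made up.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def deviation_scatter(latents, observed, hyp_name, ax):
    """Scatter per-condition deviation-from-pruning (x) against how much
    worse the hypothesis predicts observed than pruning does (y).

    latents: dict of hypothesis name -> (n_conditions, n_dims) array of
             mean latents per kinematics condition (hypothetical layout).
    observed: (n_conditions, n_dims) array of observed mean latents.
    """
    hyp, prune = latents[hyp_name], latents['pruning']
    # x: distance of this hypothesis from pruning, per condition
    x = np.linalg.norm(hyp - prune, axis=1)
    # y: this hypothesis's error from observed, minus pruning's error,
    # so y > 0 means doing worse than pruning
    y = (np.linalg.norm(hyp - observed, axis=1)
         - np.linalg.norm(prune - observed, axis=1))
    slope, intercept, r, p, _ = stats.linregress(x, y)
    ax.scatter(x, y, alpha=0.6)
    xs = np.array([x.min(), x.max()])
    ax.plot(xs, slope * xs + intercept, 'k-')
    ax.set_xlabel('deviation from pruning')
    ax.set_ylabel('error rel. to pruning')
    ax.set_title(f'{hyp_name}: r={r:.2f}, p={p:.3f}')
    return slope, r, p
```

Running this once per session and per hypothesis gives a slope (or r) per date, which is what the sign claims above refer to.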
I'll have to re-do this now with the new cloud hypothesis.
I'd say that overall things now look like this:
a trend for habitual, and no trend for cloud
And again, this holds for every session except 20120709, where the relationship is negative for both hypotheses (since they both do better than pruning there).
Let's take habitual and pruning, for example, and use one to fit the other. Order the kinematics conditions by this mean error, so that the last thetas are the conditions where habitual's and pruning's latents differ most in mean. Now assess each hypothesis's mean error in predicting the actual scores.
Plot the error in predicting the actual scores against this difference in mean error. This asks: as pruning and habitual differ more, does pruning do better and better at predicting the actuals?
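A sketch of that comparison, under the same hypothetical assumptions as the snippet above (per-condition mean latents, Euclidean error, made-up names); a rank correlation is just one way to test for the monotone trend.

```python
import numpy as np
from scipy import stats

def cross_fit_trend(lat_hab, lat_prune, actual):
    """Per kinematics condition: x = how much habitual's and pruning's
    latent means differ, y = difference in their errors predicting the
    actual scores. Returns the condition ordering and the rank
    correlation between x and y.
    """
    # per-condition disagreement between the two hypotheses' latent means
    diff = np.linalg.norm(lat_hab - lat_prune, axis=1)
    # order conditions so the last thetas are where the two differ most
    order = np.argsort(diff)
    # each hypothesis's error predicting the actual scores, per condition
    err_hab = np.linalg.norm(lat_hab - actual, axis=1)
    err_prune = np.linalg.norm(lat_prune - actual, axis=1)
    # positive rho => the more the hypotheses diverge, the more pruning
    # pulls ahead of habitual at predicting the actuals
    rho, p = stats.spearmanr(diff, err_hab - err_prune)
    return order, rho, p
```

A positive rho per session would be the quantitative version of the trend in the plots above; 20120709 should come out negative if the exception holds.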