if the amount of learning (say, the angle between the intuitive mapping and the IME for the perturbation) predicts less overlap between the output-null distributions, does cloud still do well?
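A minimal sketch of one way to quantify that angle, assuming the two mappings are linear readout matrices (rows = output dims, cols = neural dims) and that the largest principal angle between their row spaces is the comparison of interest; the function name and matrix shapes are assumptions, not the actual pipeline:

```python
import numpy as np
from scipy.linalg import subspace_angles

def mapping_angle(M_a, M_b):
    """Largest principal angle (radians) between the row spaces of two
    linear readout mappings. subspace_angles compares column spaces,
    so transpose to compare row spaces."""
    return subspace_angles(M_a.T, M_b.T).max()

# toy check: a 2x10 readout compared against itself
rng = np.random.default_rng(0)
M = rng.standard_normal((2, 10))
angle_same = mapping_angle(M, M)  # ~0 for identical mappings
```

This scalar could then be regressed against an overlap measure between the two output-null distributions to test the prediction.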
maybe do anti-pruning, where you ignore all points sampled under similar contexts, to show that cloud is not just good because it's habitual
or, alternatively, take a set-difference of the distributions predicted by pruning and by cloud
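The anti-pruning idea above could be sketched as a simple exclusion filter, assuming contexts are vectors, similarity is Euclidean distance, and a fixed radius defines "similar"; all names and the distance metric here are illustrative assumptions:

```python
import numpy as np

def anti_prune(points, contexts, query_context, radius):
    """Hypothetical 'anti-pruning': drop every sampled point whose context
    lies within `radius` of the query context, so only points from
    dissimilar contexts remain for the cloud estimate."""
    dists = np.linalg.norm(contexts - query_context, axis=1)
    return points[dists > radius]

# toy usage: 5 points with contexts on a line, query near context 0
pts = np.arange(5, dtype=float).reshape(-1, 1)
ctx = pts.copy()
kept = anti_prune(pts, ctx, np.array([0.0]), radius=1.5)
# points with contexts 0 and 1 are excluded; 2, 3, 4 remain
```

If cloud still predicts well on the `kept` subset, that would argue against a purely habitual explanation.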