As I understand info-metrics, it aims to model noisy processes from insufficient data while avoiding unduly constraining assumptions (and thus preserving statistical robustness). How does this approach balance that goal against the dangers of intractability posed by assumption-free (or assumption-light) modeling?