Open Alaya-in-Matrix opened 4 years ago
`visible_to_opt` is the score that the optimizer sees (e.g., CV error in an ML hyper-parameter tuning context), whereas `generalization` is a related metric that the optimizer does not get to see (e.g., error on a held-out test set in the same context).
Does that make sense? If so, a note can be added to the docs.
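To make the distinction concrete, here is a minimal sketch in a scikit-learn-style workflow. This is purely illustrative of the idea described above, not the library's actual implementation; the variable names mirror the two metrics being discussed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC(C=1.0)  # hyper-parameter the optimizer is tuning

# Score the optimizer observes: cross-validation error on the training data.
visible_to_opt = 1.0 - cross_val_score(model, X_train, y_train, cv=5).mean()

# Score the optimizer never observes: error on the held-out test set.
model.fit(X_train, y_train)
generalization = 1.0 - model.score(X_test, y_test)
```

The optimizer selects hyper-parameters using only `visible_to_opt`; `generalization` is recorded separately to check whether the choice also holds up on unseen data.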
Thanks for your explanation! I think it would be nice if they were also documented.
I found no explanation of `visible_to_opt` and `generalization` in the documentation. Are they some kind of linear transformation of the normalized mean score?