We find that emulator training on a fixed data set is sometimes problematic, and a small modification can lead to large improvements. More robust handling of the training dataset, e.g. by providing a cross-validation procedure or by constructing better train/validation splits from the provided points, may lead to more robust training.
This arose, for example, in PR #265.
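One way to make the training more robust is a simple k-fold scheme over the provided points: train the emulator on each fold's training subset and keep the configuration with the lowest validation error. A minimal sketch, assuming nothing about the existing codebase (the function name and fold count are illustrative):

```python
import numpy as np

def kfold_splits(n_points, n_folds=5, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation.

    Shuffles the provided points first so folds are not biased by the
    order in which the training data were generated.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_points)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        val_idx = folds[k]
        train_idx = np.concatenate(
            [folds[j] for j in range(n_folds) if j != k]
        )
        yield train_idx, val_idx

# Example: retrain on each split and keep the best validation score.
# `train_emulator` and `validation_error` are hypothetical stand-ins
# for whatever training/evaluation routines the project provides.
```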