In the original BAPE algorithm paper, Kandasamy+2015 scaled model parameter values to [0,1] using the appropriate simple linear transformation. Performing this scaling in approxposterior could help with convergence and numerical stability by keeping parameter values in a reasonable range, especially when parameters span very different scales or units.
This can be implemented without too much difficulty using the sklearn preprocessing module, e.g. the MinMaxScaler. Furthermore, the sklearn codebase is well-tested and robust, so its inclusion shouldn't introduce many dependency issues.
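For example, a minimal sketch of the scale/unscale round trip (the parameter values here are made up for illustration):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical design matrix: 5 samples of 2 model parameters
# with very different scales
theta = np.array([[1.0e4, -0.5],
                  [3.0e4,  0.2],
                  [2.5e4, -0.1],
                  [1.8e4,  0.4],
                  [2.2e4,  0.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
thetaScaled = scaler.fit_transform(theta)          # each column now spans [0, 1]
thetaOrig = scaler.inverse_transform(thetaScaled)  # recover original units

assert np.allclose(theta, thetaOrig)
```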
To do this, I could either fit the scaler using the bounds kwarg that stipulates the hard bounds for the model parameters, or train it on the GP's initial theta. I think the former is preferable, since the bounds cover the full allowed parameter range rather than just wherever the initial samples happen to lie; a sketch of that approach is below.
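A sketch of the bounds-based idea, assuming bounds is a sequence of (min, max) tuples, one per model parameter (the specific values are hypothetical):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical hard bounds for each model parameter: (min, max)
bounds = [(1.0e4, 5.0e4), (-1.0, 1.0)]

# Fitting on the transposed bounds array makes row 0 the per-parameter
# minima and row 1 the per-parameter maxima, so the scaler maps the
# full allowed range onto [0, 1] regardless of where theta lies.
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(np.asarray(bounds).T)

theta = np.array([[2.0e4, 0.5]])
thetaScaled = scaler.transform(theta)              # -> [[0.25, 0.75]]
thetaOrig = scaler.inverse_transform(thetaScaled)  # back to original units
```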