There can be a large difference between

Prec_u_sub = fit_precision_cholesky(X, graph_u_sub, verbose_level=5)

and

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
# Fit the scaler to the data and transform it (zero mean, unit variance per column)
X_scaled = scaler.fit_transform(X)
Prec_u_sub = fit_precision_cholesky(X_scaled, graph_u_sub, verbose_level=5)
In particular, this is experienced on e.g. TOP_VOLANTIS on Drogon, so it is a real issue. The differences show up as numerical stability problems in the optimization and, consequently, as large timing differences.
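A quick way to see the timing gap on a given surface (a minimal sketch; it only assumes X, X_scaled, graph_u_sub and fit_precision_cholesky from the snippets above are in scope):

import time

t0 = time.perf_counter()
fit_precision_cholesky(X, graph_u_sub, verbose_level=5)
t_raw = time.perf_counter() - t0

t0 = time.perf_counter()
fit_precision_cholesky(X_scaled, graph_u_sub, verbose_level=5)
t_scaled = time.perf_counter() - t0

print(f"raw: {t_raw:.1f} s, scaled: {t_scaled:.1f} s")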
Remedy: scale the data with the StandardScaler and then rescale either the data or the precision appropriately. Likely: fit_transform the data, then inverse_transform the updated data. A sketch of this workflow follows.
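A minimal sketch of the remedy, assuming fit_precision_cholesky takes the arguments shown above and returns a dense NumPy precision matrix; the diagonal rescaling in option 1 is standard covariance algebra, not something the library necessarily provides:

import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Fit on the well-conditioned, scaled data
Prec_u_sub = fit_precision_cholesky(X_scaled, graph_u_sub, verbose_level=5)

# Option 1: rescale the precision back to the original data scale.
# With x_scaled = (x - mu) / sigma and D = diag(sigma), we have
# Cov_raw = D @ Cov_scaled @ D, hence Prec_raw = D^-1 @ Prec_scaled @ D^-1,
# i.e. entry (i, j) is divided by sigma_i * sigma_j.
d_inv = 1.0 / scaler.scale_
Prec_u_sub_raw = Prec_u_sub * np.outer(d_inv, d_inv)  # assumes a dense ndarray

# Option 2: keep working in scaled space and only map updated data back,
# e.g. X_updated = scaler.inverse_transform(X_updated_scaled),
# where X_updated_scaled is a hypothetical array of updated (scaled) data.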