I found that when standardizing the design matrix, the code uses degrees of freedom = n for the standard deviation. I am wondering why it does not use n - 1, which is commonly used in other standardization functions and gives an unbiased estimator of the variance.
Standardization based on the root-mean-square (i.e., dividing by the standard deviation computed with n in the denominator) is universally used in machine learning. Whether the variance estimator is biased is irrelevant here, since no variance is being estimated: the point of standardizing is only to put the columns on a common scale.
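To see why the choice does not matter in practice, here is a minimal NumPy sketch (not the library's actual code; the matrix `X` below is made up for illustration). The two conventions differ per column only by the constant factor sqrt((n - 1) / n), so the relative scales of the features are identical either way.

```python
import numpy as np

# Hypothetical design matrix with 4 observations and 2 features
# on very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0],
              [4.0, 800.0]])

mu = X.mean(axis=0)

# Root-mean-square scaling: divide by sqrt(sum((x - mu)^2) / n), i.e. ddof=0.
sd_n = X.std(axis=0, ddof=0)
X_rms = (X - mu) / sd_n

# "Unbiased" scaling: divide by sqrt(sum((x - mu)^2) / (n - 1)), i.e. ddof=1.
sd_n1 = X.std(axis=0, ddof=1)
X_unbiased = (X - mu) / sd_n1

# The two results differ only by a constant factor per column,
# so every column stays on the same relative scale.
n = X.shape[0]
assert np.allclose(X_unbiased * np.sqrt(n / (n - 1)), X_rms)
```

Because the factor is the same for every column, any downstream method that is invariant to a global rescaling of the standardized matrix (or that tunes its penalty by cross-validation) behaves equivalently under either convention.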