Open solegalli opened 1 month ago
Instead of wrapping two transformers within a class, could it also be an idea to create a brand new transformer that accomplishes mean normalisation?
I understand that this could create some duplicated code, but on the other hand, a new mean normalisation transformer would be easy to understand and debug.
I'm just thinking :-)
Yes, it's perhaps a better idea.
In mean normalization, we subtract the mean from each value and then divide by the value range (the maximum minus the minimum). This centres the variables at 0 and scales their values to between -1 and 1. It is an alternative to standardization.
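The formula above can be sketched directly in NumPy; the array here is made-up example data, just to show the effect:

```python
import numpy as np

# Hypothetical example values, for illustration only.
X = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# Mean normalization: x' = (x - mean) / (max - min).
X_norm = (X - X.mean()) / (X.max() - X.min())

print(X_norm)  # → [-0.5 -0.25 0. 0.25 0.5]
```

The result is centred at 0 and, since no value can deviate from the mean by more than the full range, always falls between -1 and 1.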
Sklearn has no transformer that applies mean normalization, but we can combine the StandardScaler and the RobustScaler to do so. The thing is, both transformers need to be fit on the raw data, so we can't chain them in a pipeline: the pipeline applies the first transformation before the next transformer learns its required parameters.
My idea is to wrap both transformers within a class, so that both are fit on the raw data first, and then, with those learned parameters, we transform the data. See for example here: https://github.com/solegalli/Python-Feature-Engineering-Cookbook-Second-Edition/blob/main/ch07-scaling/Recipe-4-mean-normalization.ipynb
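A minimal sketch of that wrapper idea (the class name is my own, not from the notebook): StandardScaler with `with_std=False` learns only the mean, and RobustScaler with `with_centering=False` and `quantile_range=(0, 100)` learns the value range (max minus min), so both can be fit on the raw data inside one `fit`:

```python
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.preprocessing import StandardScaler, RobustScaler


class MeanNormalizationScaler(BaseEstimator, TransformerMixin):
    """Hypothetical wrapper: fit both scalers on the raw data,
    then apply them one after the other at transform time."""

    def fit(self, X, y=None):
        # StandardScaler(with_std=False) learns the per-column mean only.
        self.centerer_ = StandardScaler(with_std=False).fit(X)
        # RobustScaler with the full quantile range learns max - min.
        self.ranger_ = RobustScaler(
            with_centering=False, quantile_range=(0, 100)
        ).fit(X)
        return self

    def transform(self, X):
        # Subtract the mean, then divide by the value range.
        return self.ranger_.transform(self.centerer_.transform(X))
```

Because the value range is unaffected by subtracting the mean, fitting both scalers on the raw data and then applying them in sequence gives exactly (x - mean) / (max - min).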