Closed datacubeR closed 1 year ago
Finally, in the Yeo-Johnson example, could we remove the scaler and the wrapper, and just showcase the transformer in question?
Sure thing. I added the scaler in a pipeline because, with no scaling, the Yeo-Johnson transform tends to return giant values that seem wrong (but are not). I looked into the sklearn docs and their transformer has a `standardize=True` flag to deal with this, so I thought it would be a good idea to make the results comparable.
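To see why unscaled output can look alarming, here is a minimal sketch of the Yeo-Johnson transform for a single value, following the piecewise formula from Yeo & Johnson (2000). This is an illustrative re-implementation, not feature_engine's or sklearn's code; the point is that a large fitted lambda blows up raw outputs, which is what sklearn's `standardize=True` option in `PowerTransformer` compensates for.

```python
import math

def yeo_johnson(x: float, lmbda: float) -> float:
    """Yeo-Johnson power transform of a single value (Yeo & Johnson, 2000)."""
    if x >= 0:
        if lmbda != 0:
            # ((x + 1)^lambda - 1) / lambda
            return ((x + 1) ** lmbda - 1) / lmbda
        # limit as lambda -> 0
        return math.log(x + 1)
    if lmbda != 2:
        # -((-x + 1)^(2 - lambda) - 1) / (2 - lambda)
        return -(((-x + 1) ** (2 - lmbda)) - 1) / (2 - lmbda)
    # limit as lambda -> 2
    return -math.log(-x + 1)

# With a large fitted lambda, raw outputs explode even for modest inputs:
print(yeo_johnson(100.0, 3.0))  # (101**3 - 1) / 3 = 343433.33...
```

Standardizing (or putting a scaler after the transformer, as in the example under discussion) brings these values back to a comparable zero-mean, unit-variance range.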
Merging #646 (af3aab8) into main (c2f6152) will not change coverage. The diff coverage is n/a.
@@ Coverage Diff @@
## main #646 +/- ##
=======================================
Coverage 97.91% 97.91%
=======================================
Files 100 100
Lines 3748 3748
Branches 726 726
=======================================
Hits 3670 3670
Misses 29 29
Partials 49 49
Impacted Files | Coverage Δ |
---|---|
feature_engine/transformation/arcsin.py | 100.00% <ø> (ø) |
feature_engine/transformation/boxcox.py | 100.00% <ø> (ø) |
feature_engine/transformation/log.py | 92.94% <ø> (ø) |
feature_engine/transformation/power.py | 100.00% <ø> (ø) |
feature_engine/transformation/reciprocal.py | 100.00% <ø> (ø) |
feature_engine/transformation/yeojohnson.py | 100.00% <ø> (ø) |
Adding examples for: