-
Similar to the Box-Cox transformation, the `asinh` or pseudolog transformation is a common transformation for reducing skewness and stabilizing variance. It's most often used for variables that are ro…
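A minimal standalone sketch of the idea (not an MLJ transformer; the `pseudolog` name and `scale` keyword are illustrative). Unlike `log`, `asinh(x) = log(x + sqrt(x^2 + 1))` is defined for all reals, so it handles zeros and negatives while behaving like `log` for large `|x|`:

```julia
# Univariate pseudolog (asinh) transform sketch; `scale` controls how wide
# the near-linear region around zero is. Illustrative, not a library API.
pseudolog(x; scale=1.0) = asinh(x / scale)

x = [-100.0, -1.0, 0.0, 1.0, 100.0]
y = pseudolog.(x)
# asinh is odd (asinh(-x) == -asinh(x)) and asinh(0) == 0,
# so the transform preserves sign and fixes the origin.
```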
-
Integrating OnlineStats (its online learning algorithms) and giving it an easy-to-use hyperparameter tuning context would make Julia even more useful for quick ML on genuinely big data.
-
It would be very nice to have more transformers than Standardizer, OneHotEncoding, and BoxCox (and their univariate versions).
I even tried to implement a MinMaxScaler using Standardizer as an exampl…
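For concreteness, here is a minimal standalone min-max scaler sketch, deliberately independent of MLJ's transformer interface (the `fit_minmax`/`apply_minmax` names are illustrative, not MLJ API). It learns per-column ranges at fit time and rescales each column to [0, 1]:

```julia
# Column-wise min-max scaler sketch. Assumes X is an n×p matrix
# (rows = observations); constant columns would divide by zero.
function fit_minmax(X::AbstractMatrix)
    lo = minimum(X, dims=1)
    hi = maximum(X, dims=1)
    (lo=lo, hi=hi)
end

# Rescale using the ranges learned at fit time (so new data uses the
# training ranges, as a real transformer would).
apply_minmax(fitresult, X) = (X .- fitresult.lo) ./ (fitresult.hi .- fitresult.lo)

X = [1.0 10.0; 2.0 20.0; 3.0 30.0]
fr = fit_minmax(X)
Xt = apply_minmax(fr, X)
```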
-
Regarding unsupervised models such as PCA, k-means, etc., discussed in #44.
I know these are commonly encapsulated within the transformer formalism, but it would do the methodology behind them injusti…
-
AFAIK, we use the `n x p` convention, where `n` is the number of observations. It seems to me, however, that we should make the whole machinery able to take transposes (especially after ranting against o…
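One reason accepting transposes is cheap in Julia: `transpose`/`'` produce lazy wrappers, so a `p x n` matrix can be presented in the `n x p` convention without copying. A small sketch (the `nrows` helper is illustrative, not MLJ API):

```julia
# p×n storage presented as n×p via a lazy, zero-copy transpose wrapper.
Xpn = [1.0 2.0 3.0; 4.0 5.0 6.0]   # p = 2 features × n = 3 observations
Xnp = transpose(Xpn)                # 3 observations × 2 features, no copy

# Generic code that only assumes AbstractMatrix works on both layouts.
nrows(X::AbstractMatrix) = size(X, 1)
```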
-
*This is an issue reserved for the [TU Delft Student Software Project '23](https://github.com/orgs/JuliaTrustworthyAI/projects/4)*
[MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/) is a po…
-
It would be useful, especially to potential contributors, to have a unified description of how public interfaces should be structured. At first glance, I assumed we would be attempting to stay as clo…
-
Hello,
I am doing feature selection using scikit-learn's Recursive Feature Elimination (RFE).
This algorithm takes ages in Python. I searched on Julia Observer and couldn't find any equivalent in Julia…
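To illustrate the algorithm being asked about: RFE repeatedly refits a model and drops the least important feature until a target count remains. The sketch below replaces the fitted model's importance measure with absolute correlation against the target, purely to keep it self-contained; real RFE (as in scikit-learn) uses model coefficients or feature importances:

```julia
# Naive recursive-feature-elimination sketch. The correlation-based
# "importance" is an assumption for self-containment, not scikit-learn's method.
using Statistics

function rfe_correlation(X::AbstractMatrix, y::AbstractVector, n_keep::Int)
    remaining = collect(1:size(X, 2))
    while length(remaining) > n_keep
        scores = [abs(cor(X[:, j], y)) for j in remaining]
        deleteat!(remaining, argmin(scores))  # drop the weakest feature
    end
    remaining
end

y = collect(1.0:10.0)
z = repeat([1.0, -1.0], 5)          # feature nearly unrelated to y
X = hcat(y, z, -y)                  # columns 1 and 3 track y exactly
selected = rfe_correlation(X, y, 2)
```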
-
A number of feature-reduction strategies only make sense in the context of a supervised learning task because they must consult a target variable when trained. For example, one might want to drop fea…
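A toy version of such a target-aware filter makes the point: it cannot be fit without `y`, so it does not fit the purely unsupervised transformer mold. The names and the correlation threshold here are illustrative:

```julia
# Supervised feature filter sketch: keep columns whose absolute correlation
# with the target exceeds a threshold. Fitting requires y by construction.
using Statistics

function fit_target_filter(X::AbstractMatrix, y::AbstractVector; threshold=0.1)
    [j for j in 1:size(X, 2) if abs(cor(X[:, j], y)) >= threshold]
end

apply_target_filter(cols, X) = X[:, cols]

y = [1.0, 2.0, 3.0, 4.0]
X = [1.0 6.0; 2.0 4.0; 3.0 4.0; 4.0 6.0]   # column 2 is uncorrelated with y
cols = fit_target_filter(X, y)
```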
-
[[Possibly related to the API discussion on Clustering Models](https://github.com/alan-turing-institute/MLJ.jl/issues/852)]
I am in the process of implementing several missing-value imputers in a new BetaML…
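For context, the simplest member of this family is a column-wise mean imputer: learn per-column means over observed values at fit time, then fill `missing` entries at transform time. A library-independent sketch (illustrative names, not the BetaML API):

```julia
# Mean-imputation sketch for a matrix with `missing` entries.
using Statistics

fit_mean_impute(X) = [mean(skipmissing(col)) for col in eachcol(X)]

function apply_mean_impute(means, X)
    Xo = Matrix{Float64}(undef, size(X)...)
    for j in axes(X, 2), i in axes(X, 1)
        Xo[i, j] = ismissing(X[i, j]) ? means[j] : X[i, j]
    end
    Xo
end

X = [1.0 10.0; missing 20.0; 3.0 missing]
m = fit_mean_impute(X)        # per-column means over observed entries
Xi = apply_mean_impute(m, X)
```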