JuliaAI / MLJLinearModels.jl

Generalized Linear Regressions Models (penalized regressions, robust regressions, ...)
MIT License

What is the new scale_penalty_with_samples=true doing? #124

Closed: olivierlabayle closed this issue 2 years ago

olivierlabayle commented 2 years ago

Hi,

I have just inadvertently upgraded to 0.6 and witnessed massive, undesirable changes in the output of my program. I think I have narrowed it down to MLJLinearModels and the new hyperparameter scale_penalty_with_samples. I don't know exactly what it does, but from the experiment below it does not seem to be a good default by any means. Could you provide more information on this hyperparameter and explain why it was made the default?

using MLJLinearModels
using MLJBase
using StableRNGs
using Distributions

rng = StableRNG(123)
# scale_penalty_with_samples=false turns back to the 0.5.7 results
model = LogisticClassifier(fit_intercept=false)
logit(X, θ) = 1 ./ (1 .+ exp.(-X*θ))  # note: this computes the logistic (inverse-logit) function

N = 1000
P = 3
θ = [-1, 3, -10]
X = rand(rng, N, P)
μ_y = logit(X, θ)
y = zeros(N)
for index in eachindex(μ_y)
    y[index] = rand(rng, Bernoulli(μ_y[index]))
end

mach = machine(model, MLJBase.table(X), categorical(y))
fit!(mach)
fitted_params(mach)
ypred = MLJBase.predict(mach, X)
mean_log_loss = mean(log_loss(ypred, y))

## Output version 0.6.3:

# (classes = CategoricalArrays.CategoricalValue{Float64, UInt32}[0.0, 1.0],
#  coefs = [:x1 => -0.1548456157597273, :x2 => -0.13252104591451244, :x3 => -0.19765403186822833],
#  intercept = nothing,)

# logloss mean: 0.6043495080176984

# first 5 predictions
# UnivariateFinite{Multiclass{2}}(0.0=>0.558, 1.0=>0.442)
# UnivariateFinite{Multiclass{2}}(0.0=>0.556, 1.0=>0.444)
# UnivariateFinite{Multiclass{2}}(0.0=>0.566, 1.0=>0.434)
# UnivariateFinite{Multiclass{2}}(0.0=>0.507, 1.0=>0.493)
# UnivariateFinite{Multiclass{2}}(0.0=>0.532, 1.0=>0.468)

## Output version 0.5.7:

# (classes = CategoricalArrays.CategoricalValue{Float64, UInt32}[0.0, 1.0],
#  coefs = [:x1 => -1.2849403317718433, :x2 => 1.978169995235155, :x3 => -6.365523473187024],
#  intercept = nothing,)

# logloss mean: 0.2351991376364725

# first 5 predictions
#  UnivariateFinite{Multiclass{2}}(0.0=>0.998, 1.0=>0.00238)
#  UnivariateFinite{Multiclass{2}}(0.0=>0.9, 1.0=>0.1)
#  UnivariateFinite{Multiclass{2}}(0.0=>0.857, 1.0=>0.143)
#  UnivariateFinite{Multiclass{2}}(0.0=>0.642, 1.0=>0.358)
#  UnivariateFinite{Multiclass{2}}(0.0=>0.63, 1.0=>0.37)
tlienart commented 2 years ago

It's a convention on the objective function: the idea is to put the loss and the penalty on the same scale, so that if you have twice as much data you don't have to change the regularisation. In the case of ridge, for instance, the scaled objective is

$\frac{1}{n}\|y - Xb\|^2 + \lambda \|b\|^2$

and using it is equivalent to keeping the summed loss $\|y - Xb\|^2$ and multiplying the penalty by $n$.
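
To make that concrete, here is a minimal sketch in plain Julia (illustrative helper functions only, not the package's internal code), comparing the two conventions for ridge:

# "scaled":   mean squared loss + λ‖b‖²  (roughly what scale_penalty_with_samples=true amounts to)
# "unscaled": summed squared loss + λ‖b‖² (the 0.5.x convention)
ridge_scaled(b, X, y, λ)   = sum(abs2, y .- X*b) / length(y) + λ * sum(abs2, b)
ridge_unscaled(b, X, y, λ) = sum(abs2, y .- X*b)             + λ * sum(abs2, b)

# For n = length(y) we have n * ridge_scaled(b, X, y, λ) == ridge_unscaled(b, X, y, n*λ),
# so both objectives share the same minimiser: scaling the penalty with n keeps λ on the
# scale of the per-sample loss, and λ does not need retuning when n grows.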

tlienart commented 2 years ago

PS: wait, I'm confused. You made those changes, didn't you? I don't think anyone has touched that logic since you did.

Ah no, it's not you, it's @jbrea; maybe he can chip in if you have further questions.

Note: in any case, I think that parameter is best obtained via hyperparameter optimisation.
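
For instance, a sketch of what that tuning could look like with MLJTuning (assuming MLJTuning is installed; X and y as in the reproduction script above):

using MLJBase, MLJTuning, MLJLinearModels

model = LogisticClassifier(fit_intercept=false)
# search lambda on a log scale rather than relying on any particular default
r = range(model, :lambda, lower=1e-6, upper=1e2, scale=:log)
tuned = TunedModel(model=model, range=r, tuning=Grid(resolution=20),
                   resampling=CV(nfolds=5), measure=log_loss)
mach = machine(tuned, MLJBase.table(X), categorical(y))
fit!(mach)
fitted_params(mach).best_model.lambda   # the selected regularisation strength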

olivierlabayle commented 2 years ago

Thanks for the explanation @tlienart. I think users (like me) will expect the default behaviour of an algorithm, especially one as simple as logistic regression, to work out of the box and provide a reasonable fit with the default hyperparameters. As far as I can see, this new hyperparameter messes things up: the output is almost like a biased coin toss. It would probably make more sense for the default to be false, wouldn't it? Moreover, if I have followed the history correctly, that would have made it a non-breaking change from 0.5.7.

tlienart commented 2 years ago

Please have a look at https://github.com/JuliaAI/MLJLinearModels.jl/issues/108 for the reasoning behind it, specifically the tuning.

I don't think you can expect a default that is not scaled to work well across the board for users. More generally, I don't think you can expect a good default for this at all: these parameters must be tuned, and the tuning should not be affected by the sample size.

jbrea commented 2 years ago

> I think users (like me) will expect the default behaviour of an algorithm, especially one as simple as logistic regression, to work out of the box and provide a reasonable fit with the default hyperparameters.

> I don't think you can expect a default that is not scaled to work well across the board for users. More generally, I don't think you can expect a good default for this at all: these parameters must be tuned, and the tuning should not be affected by the sample size.

I agree with both. I think scale_penalty_with_samples = true is the better default. However, the current default of lambda = 1 amounts to rather strong regularisation when the input is (close to) standardised, which may be the most common case (see below); users then usually have to lower lambda to avoid underfitting. We could instead set the default to lambda = 1e-8 (or some other small value), on the grounds that it barely affects the unregularised solution in the non-separable case while still preventing runaway solutions in the separable case; users would then usually have to deal with overfitting instead.

So, if we have good evidence that 1) (close to) standardised input is the most common case and 2) the majority of users perceive (potential) overfitting as more reasonable than (potential) underfitting, I would argue for lowering the default lambda.


If we write the solution of logistic regression as $\theta = \beta \tilde\theta$ with $|\tilde\theta|_2 = 1$, and assume perfect separability and uncorrelated predictors with mean 0 and variance 1, then roughly $y_i\tilde\theta'x_i \approx 1$ and we can find $\beta$ by minimising $\log(1 + \exp(-\beta)) + \frac{\lambda}2\beta^2$. For $\lambda = 1$ the solution is approximately $\beta \approx 0.8$, so the predicted probability for the correct class is approximately $1/(1 + \exp(-\beta)) \approx 0.7$, which looks heavily regularised for a separable problem. For lambda = 1e-8 the predicted probability for the correct class would be essentially 1.
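
The one-dimensional minimisation above is easy to check numerically; here is a crude grid-search sketch under the same assumptions (not package code):

σ(t) = 1 / (1 + exp(-t))
objective(β, λ) = log(1 + exp(-β)) + (λ / 2) * β^2   # penalised loss for a single "margin" β

# compare the implied probability for the correct class under strong vs. tiny regularisation
βs = range(0, 30; length=300_000)
for λ in (1.0, 1e-8)
    β̂ = βs[argmin(objective.(βs, λ))]
    println("λ = $λ:  β ≈ $(round(β̂, digits=2)),  σ(β) ≈ $(round(σ(β̂), digits=3))")
end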

tlienart commented 2 years ago

Thanks, I like this suggestion

olivierlabayle commented 2 years ago

I also agree. Why not go all the way to lambda = 0, which is vanilla (unpenalised) logistic regression?

jbrea commented 2 years ago

> Why not go all the way to lambda = 0

We could. I just don't much like the fact that, in the separable case, the solution would have infinite norm, $|\theta|_2 = \infty$, which no optimiser ever actually reaches, obviously. I would therefore prefer the default to be at least lambda = eps().

tlienart commented 2 years ago

Thanks both for the discussion. The default is now set to eps(); a patch release is under way.

ablaom commented 2 years ago

@tlienart This is breaking, no? I think we need a breaking (minor) release, not a patch. Or am I missing something?

tlienart commented 2 years ago

Ok, would you mind doing it? Thanks!

Done! https://github.com/JuliaAI/MLJLinearModels.jl/commit/5bb7c6de3cfba78eeaab2e1efe851e8c4ef1fd4e#commitcomment-80390857

ablaom commented 2 years ago

Thanks @tlienart. I'm making a PR to General to yank 0.6.5 from the registry.