Closed: olivierlabayle closed this issue 2 years ago
It's a convention on the objective function; the reason is to put the loss and the penalty on the same scale, so that if you have twice as much data you don't have to change the regularisation. In the case of ridge, for instance, the scaled objective

$$\frac{1}{n}\|y - Xb\|^2 + \lambda\|b\|^2$$

is equivalent (multiplying through by $n$) to

$$\|y - Xb\|^2 + n\lambda\|b\|^2,$$

so the effective penalty of the summed loss grows with $n$.
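As a quick numerical check of the equivalence above, here is a small NumPy sketch (standalone illustration, not MLJLinearModels code) solving the normal equations of both objectives:

```python
# Check that (1/n)||y - Xb||^2 + lambda ||b||^2 and
# ||y - Xb||^2 + (n*lambda) ||b||^2 share the same minimiser.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 3, 0.1
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Normal equations of the scaled objective: (X'X/n + lam*I) b = X'y/n
b_scaled = np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)

# Normal equations of the summed objective with penalty n*lam:
# (X'X + n*lam*I) b = X'y
b_summed = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ y)

print(np.allclose(b_scaled, b_summed))  # → True, the solutions coincide
```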
PS: wait, I'm confused: you made those changes, didn't you? I don't think anyone has touched that logic since you did.
Ah, no, it's not you, it's @jbrea; maybe he can chip in if you have further questions.
Note: in any case I think that parameter is best obtained via hyperparameter optimisation.
Thanks for the explanation @tlienart. I think users (like me) will expect the default behavior of the algorithm, especially one as simple as logistic regression, to work out of the box and provide a reasonable fit with the default hyperparameters. This new hyperparameter does seem to mess things up: as far as I can see, the output is almost like a biased coin toss. It would probably make more sense to default to `false`, wouldn't it? Moreover, this would have been a non-breaking change from 0.5.7, if I have followed the history correctly.
Please have a look at https://github.com/JuliaAI/MLJLinearModels.jl/issues/108 for the reasoning behind it, specifically the tuning.
I don't think you can expect a default that is not scaled to work well across the board for users. More generally I don't think you can expect a good default for this full stop. These parameters must be tuned and the tuning should not be affected by sample size.
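The "tuning should not be affected by sample size" point can be illustrated with a small NumPy sketch (a standalone ridge example, not MLJLinearModels code): duplicating the dataset leaves the solution of the scaled `(1/n)` objective unchanged, while the summed objective's solution shifts, so an unscaled `lambda` would have to be re-tuned whenever the sample size changes.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = rng.normal(size=40)
X2, y2 = np.vstack([X, X]), np.concatenate([y, y])  # same data, twice over

def ridge_scaled(X, y, lam):
    # argmin_b (1/n) ||y - Xb||^2 + lam ||b||^2
    n, p = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ y / n)

def ridge_summed(X, y, lam):
    # argmin_b ||y - Xb||^2 + lam ||b||^2
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

lam = 0.5
# Scaled objective: duplication leaves the solution unchanged.
scaled_invariant = np.allclose(ridge_scaled(X, y, lam), ridge_scaled(X2, y2, lam))
# Summed objective: the effective regularisation is halved, solution moves.
summed_changes = not np.allclose(ridge_summed(X, y, lam), ridge_summed(X2, y2, lam))
print(scaled_invariant, summed_changes)  # → True True
```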
> I think users (like me) will expect the default behavior of the algorithm, especially one as simple as logistic regression, to work out of the box and provide a reasonable fit with the default hyperparameters.

> I don't think you can expect a default that is not scaled to work well across the board for users. More generally I don't think you can expect a good default for this full stop. These parameters must be tuned and the tuning should not be affected by sample size.
I agree with both. I think `scale_penalty_with_samples = true` is the better default. However, the current default `lambda = 1` means rather strong regularisation when the input is (close to) standardised, which may be the most common case (see below). In this case, users usually have to change `lambda` to avoid underfitting. We could also set the default to `lambda = 1e-8` (or some other small value), with the argument that it basically doesn't affect the unregularised solution in the non-separable case while still avoiding runaway solutions in the separable case. Users would then usually have to deal with overfitting.

So, if we have good evidence that 1) (close to) standardised input is the most common case and 2) the majority of users perceive (potential) overfitting as a more reasonable fit than (potential) underfitting, I would argue for lowering the default `lambda`.
If we write the solution of logistic regression as $\theta = \beta \tilde\theta$ with $\|\tilde\theta\|_2 = 1$, and assume perfect separability and uncorrelated predictors with mean 0 and variance 1, so that roughly $y_i\tilde\theta'x_i \approx 1$, then we can find $\beta$ by minimising $\log(1 + \exp(-\beta)) + \frac{\lambda}{2}\beta^2$. For $\lambda = 1$ the stationarity condition is $\beta = 1/(1 + e^{\beta})$, whose solution is approximately $\beta \approx 0.4$, and therefore the prediction for the correct class is approximately $1/(1 + \exp(-\beta)) \approx 0.6$. This looks heavily regularised for a separable problem. For `lambda = 1e-8` the prediction for the correct class would be essentially 1.
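To make the one-dimensional reduction above concrete, here is a small plain-Python sketch (not MLJLinearModels code) that minimises $g(\beta) = \log(1 + e^{-\beta}) + \frac{\lambda}{2}\beta^2$ by ternary search ($g$ is convex in $\beta$) and reports the implied prediction $\sigma(\beta)$ for the correct class:

```python
import math

def sigma(b):
    # logistic function, the predicted probability of the correct class
    return 1.0 / (1.0 + math.exp(-b))

def minimise_g(lam, lo=0.0, hi=60.0, iters=200):
    # ternary search is valid because g is convex in beta
    def g(b):
        return math.log1p(math.exp(-b)) + 0.5 * lam * b * b
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

b_strong = minimise_g(1.0)    # default-strength penalty
b_tiny   = minimise_g(1e-8)   # near-unregularised
print(sigma(b_strong))        # heavily shrunk prediction, well below 1
print(sigma(b_tiny))          # essentially 1
```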
Thanks, I like this suggestion
Also agreed. Why not go all the way to `lambda = 0`, which is vanilla logistic regression?

> why not completely `lambda = 0`
We could do this. I just don't much like the fact that, in the separable case, the solution would have infinite norm, $\|\theta\|_2 = \infty$, which is obviously never reached by any optimiser. Therefore I would prefer the default to be at least `lambda = eps()`.
Thanks both for the discussion; default set to `eps()`, patch release under way.
@tlienart This is breaking, no? I think we need a breaking (minor) release not a patch. Or am I missing something?
Ok, would you mind doing it? Thanks!
Thanks @tlienart. I'm making a PR to General to yank 0.6.5 from the registry.
Hi,
I have just inadvertently upgraded to 0.6 and witnessed massive undesirable changes in the output of my program. I think I have narrowed it down to MLJLinearModels and the new hyperparameter `scale_penalty_with_samples`. I don't know exactly what this does, but from the experiment below it does not seem to be a good default by any means. Could you provide more information on this hyperparameter and the motivation for introducing it as a default?