The following conventions are used for the ElasticNet objective functions:
sklearn:    of = 1/(2N) C + \alpha \mu P_1 + 1/2 \alpha (1-\mu) P_2
McConaghy:  of = C + \lambda \rho P_1 + \lambda (1-\rho) P_2
Here, C is the squared two-norm of the residuals, and P_1 and P_2 are the L1 and squared L2 regularization terms, respectively.
McConaghy presumably also implies a factor of 1/N in front of C; otherwise the effective amount of regularization would scale with the number of samples, which doesn't make sense.
Assuming this factor, the following formulae should be applied when mapping the regularization parameters from sparsereg's interface to that of sklearn.
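The mapping follows by matching coefficients: multiplying McConaghy's objective (with the assumed 1/N factor) by 1/2 gives 1/(2N) C + (\lambda \rho / 2) P_1 + (\lambda (1-\rho) / 2) P_2, so \alpha \mu = \lambda \rho / 2 and \alpha (1-\mu) = \lambda (1-\rho), which solve to \alpha = \lambda (2-\rho)/2 and \mu = \rho/(2-\rho). Below is a minimal Python sketch under these assumptions; the helper name map_mcconaghy_to_sklearn is hypothetical and not part of sparsereg's API:

```python
import numpy as np
from sklearn.linear_model import ElasticNet


def map_mcconaghy_to_sklearn(lmbda, rho):
    """Map (lambda, rho) from McConaghy's convention, assuming the 1/N
    factor in front of C, to sklearn's (alpha, l1_ratio)."""
    alpha = lmbda * (2 - rho) / 2
    l1_ratio = rho / (2 - rho)
    return alpha, l1_ratio


# Numerical check: for any w, the two objectives should agree up to the
# overall factor of 2 introduced when matching coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = rng.normal(size=50)
w = rng.normal(size=5)

lmbda, rho = 0.3, 0.7
alpha, l1_ratio = map_mcconaghy_to_sklearn(lmbda, rho)

N = X.shape[0]
C = np.sum((y - X @ w) ** 2)  # squared two-norm of the residuals
P1 = np.sum(np.abs(w))        # L1 penalty
P2 = np.sum(w ** 2)           # squared L2 penalty

of_sklearn = C / (2 * N) + alpha * l1_ratio * P1 + 0.5 * alpha * (1 - l1_ratio) * P2
of_mcconaghy = C / N + lmbda * rho * P1 + lmbda * (1 - rho) * P2
assert np.isclose(of_mcconaghy, 2 * of_sklearn)

# The mapped parameters can then be passed straight to sklearn:
model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
```

Since the two objectives differ only by a constant factor, they share the same minimizer, so fitting sklearn's ElasticNet with the mapped parameters reproduces the solution in McConaghy's convention.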