This is just something I've noticed; I'm creating an issue so we can take a closer look later.
With the CGG data, applying regularization to the betas didn't seem to change the inferred model parameters at all (SUPER old results here).
I figured it could just be bad luck with CGG, so I tried again with Tyler's RBD dataset (new prep) -- and found that even the smallest penalty seems to stop the model from fitting at all (experiment details here).
Without reg:
With reg:
NOTE: I also trained these models for a relatively short time and used simple nonlinearities. Still, I've noticed a consistent trend of regularization not behaving as I'd expect -- it could just be user error, but I want to take a closer look later.
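For reference when debugging later, here's a minimal sketch of the kind of sanity check I have in mind -- assuming the regularization is an L1 (lasso) penalty on the betas added directly to the loss. All names here (`predict`, `loss_fn`, `lam`) are hypothetical and not our actual code; the point is just that the penalty should visibly show up in the gradients when `lam > 0`.

```python
import jax
import jax.numpy as jnp

# Hypothetical latent-phenotype model: per-mutation effects `beta`,
# a simple sigmoid nonlinearity, and an L1 (lasso) penalty on beta.
def predict(beta, X):
    # X: (n_variants, n_mutations) binary encoding; latent phenotype -> sigmoid
    latent = X @ beta
    return jax.nn.sigmoid(latent)

def loss_fn(beta, X, y, lam):
    resid = predict(beta, X) - y
    mse = jnp.mean(resid ** 2)
    return mse + lam * jnp.sum(jnp.abs(beta))  # penalty term should scale with lam

# Quick check that the penalty actually reaches the gradients:
key = jax.random.PRNGKey(0)
X = jax.random.bernoulli(key, 0.1, (100, 20)).astype(jnp.float32)
beta = jax.random.normal(key, (20,)) * 0.1
y = predict(beta, X)

g0 = jax.grad(loss_fn)(beta, X, y, 0.0)
g1 = jax.grad(loss_fn)(beta, X, y, 1e-3)
print(jnp.max(jnp.abs(g1 - g0)))  # should be ~lam, not 0 -- if 0, lam never enters the loss
```

If a check like this shows the gradients don't change with the penalty strength, that would point to the lasso term never making it into the objective (which would explain the CGG result); if they do change, the RBD behavior is more likely a scaling/optimization issue.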