yuelinan / DARE


How to enhance the sparsity regularizer #2

Closed jugechengzi closed 7 months ago

jugechengzi commented 7 months ago

Hello, I find that the sparsity on the test set changes very dramatically. If I want to enhance the sparsity regularizer, which hyperparameter in the code should I adjust? Could you give any suggestions?

yuelinan commented 7 months ago

Hi, when I ran the experiments, the average sparsity on the validation set was around 0.07 (with alpha set to 0.07). I have not checked how the sparsity changes on the test set. From my experience, you could try changing torch.abs(alpha-0.07) to alpha-0.07. Thank you
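To make the suggestion concrete, here is a minimal sketch of the difference between the two penalty forms. The names `alpha` (mean fraction of selected tokens) and `target` are illustrative assumptions, not identifiers from the DARE code; the actual implementation operates on tensors, while this sketch uses plain floats.

```python
def sparsity_penalty_abs(alpha: float, target: float = 0.07) -> float:
    # Symmetric penalty: selecting more OR fewer tokens than the target
    # is penalized equally, so alpha settles near the target.
    return abs(alpha - target)

def sparsity_penalty_signed(alpha: float, target: float = 0.07) -> float:
    # Signed penalty: dropping the abs() means under-selection (alpha
    # below target) is rewarded, pushing the selection ratio down harder.
    return alpha - target
```

The signed variant therefore acts as a stronger downward pressure on sparsity, which is why it is suggested here when the test-set sparsity overshoots the target.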

jugechengzi commented 7 months ago

Thank you very much for your reply. My finding is that when I set "--selection" to 0.15, the sparsity on the test set is around 0.08, so changing torch.abs(alpha-0.07) to alpha-0.07 may not help. I want to know which of the following hyperparameters is $\lambda_2$ in your paper. (screenshot of the hyperparameter list attached)

yuelinan commented 7 months ago

Hi, $\lambda_2$ is a Lagrange multiplier that is updated during training. We follow this setting from https://github.com/bastings/interpretable_predictions/blob/master/latent_rationale/beer/models/latent.py.
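A minimal sketch of such a Lagrangian-relaxation update, in the spirit of the latent_rationale code linked above (the names `lambda0`, `lagrange_lr`, and `constraint` are illustrative assumptions, not exact identifiers from that file):

```python
import math

def lagrange_step(lambda0: float, constraint: float,
                  lagrange_lr: float = 0.01) -> float:
    # constraint = measured_sparsity - target_sparsity. While the model
    # keeps over-selecting (constraint > 0), the multiplier grows
    # multiplicatively, so the sparsity penalty is weighted ever harder.
    return lambda0 * math.exp(lagrange_lr * constraint)

lam = 1.0
for _ in range(100):
    # rationale persistently 8 points above the target sparsity
    lam = lagrange_step(lam, constraint=0.08)
# lam has grown above 1.0, strengthening the sparsity term automatically
```

Because the multiplier adapts on its own, $\lambda_2$ is not a fixed hyperparameter you can set directly; only its learning rate and initial value are.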

jugechengzi commented 7 months ago

Thank you for your reply!

So, if I want to increase the influence of the term $\lambda_2 |L_{sh} - l_r|$, which of the above hyperparameters should I adjust, and should it be larger or smaller? I have tried setting "--lasso" to each value in [1, 0.5, 0.1, 0.2, 0.02, 0.002, 0.0002, 0.00002], but all failed. I also tried adjusting "--lagrange_alpha" from 0.5 to 5, but it still failed.

yuelinan commented 7 months ago

Hi, I didn't tune the sparsity-related hyperparameters much in my experiments. I think you can add an additional scale parameter, e.g. change `self.lambda0.detach() * c0` to `2 * self.lambda0.detach() * c0`.
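One way to make that change tunable rather than hard-coding the factor 2 is to pull the multiplier out as its own hyperparameter. This is only a sketch; `sparsity_scale`, `lambda0`, and `c0` are illustrative names standing in for the corresponding tensors in the training loop.

```python
def scaled_sparsity_term(lambda0: float, c0: float,
                         sparsity_scale: float = 2.0) -> float:
    # Generalizes the suggestion above: sparsity_scale=1.0 recovers the
    # original lambda0 * c0 term, sparsity_scale=2.0 doubles its weight,
    # and other values let you sweep the strength directly.
    return sparsity_scale * lambda0 * c0
```

Sweeping `sparsity_scale` (e.g. 1, 2, 5) would then show directly how strongly the Lagrangian sparsity term needs to be weighted for the test-set sparsity to stabilize.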

jugechengzi commented 7 months ago

Ok, thank you very much!