VirtuosoResearch / Regularized-Self-Labeling

A regularized self-labeling approach to improve the generalization and robustness of fine-tuned models
https://arxiv.org/abs/2111.04578
MIT License

Baseline (L2-SP and L-PGM) implementation in your paper. #1

Open kai-wen-yang opened 1 year ago

kai-wen-yang commented 1 year ago

The official code for both of these methods is TensorFlow-based, which I cannot use. I have tried to implement them in PyTorch on my own, but the accuracies are worse than direct fine-tuning. Could you please provide the implementation of L2-SP and L2-PGM from Table 2 of your paper, or the reference code you used?

lidongyue12138 commented 1 year ago

Hi Kaiwen,

To use L2-SP, specify `--reg_method penalty` in the script train_constraint.py. Then use `--reg_extractor` and `--reg_predictor` to set the weights of the penalties when they are combined with the loss.
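For intuition, the penalty mode corresponds to the standard L2-SP regularizer: an L2 distance between the feature extractor and its pre-trained initialization, plus a plain L2 norm on the freshly initialized predictor head. Here is a minimal PyTorch sketch of that penalty; the function name, toy tensors, and coefficient values are illustrative and not taken from the repo:

```python
import torch

def l2_sp_penalty(extractor_params, extractor_init, predictor_params,
                  reg_extractor, reg_predictor):
    """L2-SP-style penalty: pull the feature extractor toward its
    pre-trained weights and shrink the new predictor head toward zero."""
    penalty = torch.tensor(0.0)
    for p, p0 in zip(extractor_params, extractor_init):
        penalty = penalty + reg_extractor * (p - p0).pow(2).sum()
    for p in predictor_params:
        penalty = penalty + reg_predictor * p.pow(2).sum()
    return penalty

# Toy check: extractor drifted by 1.0 in each of two entries,
# predictor head has squared norm 9.
w = [torch.tensor([1.0, 2.0])]
w0 = [torch.tensor([0.0, 1.0])]
head = [torch.tensor([3.0])]
total = l2_sp_penalty(w, w0, head, reg_extractor=0.5, reg_predictor=0.1)
print(total)  # 0.5 * 2 + 0.1 * 9 = 1.9
```

In training, this term would simply be added to the task loss before calling `backward()`.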

To use L2-PGM, specify `--reg_method constraint`. Then use `--reg_extractor` and `--reg_predictor` to set the L2 distances (the radii of the constraints) for the feature extractor and the predictor.
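In constraint mode, a projected-gradient method keeps the weights inside an L2 ball around the pre-trained initialization: after each optimizer step, any parameter group that has drifted beyond the radius is projected back onto the ball. A minimal sketch of that projection step (the helper name and toy values are illustrative, not from the repo):

```python
import torch

def project_to_ball(params, init_params, radius):
    """Project a parameter group back into an L2 ball of the given
    radius around its pre-trained weights (applied after each step)."""
    with torch.no_grad():
        diff = [p - p0 for p, p0 in zip(params, init_params)]
        # Joint L2 distance of the whole group from its initialization.
        norm = torch.sqrt(sum(d.pow(2).sum() for d in diff))
        if norm > radius:
            scale = radius / norm
            for p, p0, d in zip(params, init_params, diff):
                p.copy_(p0 + scale * d)

# Toy check: a weight at distance 5 from its init, radius 2.
p = [torch.tensor([3.0, 4.0])]
p0 = [torch.tensor([0.0, 0.0])]
project_to_ball(p, p0, radius=2.0)
print(p[0])  # rescaled to distance 2: tensor([1.2000, 1.6000])
```

With separate radii for the extractor and the predictor, the projection is applied to each group independently, which matches how the two flags are described above.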

As an example of the training process, take a look at the ConstraintTrainer within the train_constraint.py script.