Optimization-AI / LibAUC

LibAUC: A Deep Learning Library for X-Risk Optimization
https://libauc.org/
MIT License

Is AUCMLoss sensitive to hyper-params? #26

Closed · hehyuan closed this 1 year ago

hehyuan commented 1 year ago

In my experiments, I use the AUCMLoss implemented by LibAUC, but its performance does not seem good.

[19:35:26] Epoch:001 Train AUC: 0.688; Validate AUC: 0.676; Test AUC: 0.685                                                                         Trainer.py:159
[19:35:28] Epoch:002 Train AUC: 0.689; Validate AUC: 0.678; Test AUC: 0.687                                                                         Trainer.py:159
[19:35:30] Epoch:003 Train AUC: 0.690; Validate AUC: 0.679; Test AUC: 0.688                                                                         Trainer.py:159
[19:35:32] Epoch:004 Train AUC: 0.691; Validate AUC: 0.680; Test AUC: 0.689                                                                         Trainer.py:159
[19:35:34] Epoch:005 Train AUC: 0.692; Validate AUC: 0.681; Test AUC: 0.690                                                                         Trainer.py:159
[19:35:36] Epoch:006 Train AUC: 0.693; Validate AUC: 0.683; Test AUC: 0.691                                                                         Trainer.py:159
[19:35:38] Epoch:007 Train AUC: 0.694; Validate AUC: 0.684; Test AUC: 0.692                                                                         Trainer.py:159
[19:35:41] Epoch:008 Train AUC: 0.695; Validate AUC: 0.685; Test AUC: 0.692                                                                         Trainer.py:159
[19:35:43] Epoch:009 Train AUC: 0.695; Validate AUC: 0.685; Test AUC: 0.693                                                                         Trainer.py:159
[19:35:45] Epoch:010 Train AUC: 0.696; Validate AUC: 0.686; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:47] Epoch:011 Train AUC: 0.696; Validate AUC: 0.686; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:49] Epoch:012 Train AUC: 0.696; Validate AUC: 0.686; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:51] Epoch:013 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:53] Epoch:014 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:55] Epoch:015 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:57] Epoch:016 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:35:59] Epoch:017 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:36:01] Epoch:018 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:36:04] Epoch:019 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:36:06] Epoch:020 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:36:08] Epoch:021 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:36:10] Epoch:022 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159
[19:36:13] Epoch:023 Train AUC: 0.696; Validate AUC: 0.687; Test AUC: 0.694                                                                         Trainer.py:159

This log was produced when I used AUCMLoss and the PESG optimizer on the MNIST dataset with a ResNet18. My training code is:

if self.method == "dam":
    from libauc.losses import AUCMLoss
    from libauc.optimizers import PESG

    # AUCMLoss implements the AUC-margin objective; PESG is its paired
    # primal-dual optimizer, which is why the loss instance is passed to it.
    self.dam_loss = AUCMLoss()
    self.dam_optimizer = PESG(
        model=self.mi.model,
        loss_fn=self.dam_loss,
        momentum=0.9,
        lr=0.1,
        margin=1.0,
        epoch_decay=0.003,
        weight_decay=1e-4,
        verbose=False,
    )
    self.dam_dataloader = DataLoader(
        self.di.train_raw, batch_size=self.batch_size, shuffle=True
    )
...
def dam_twoset_epoch_training_step(self):
    total_risk = 0.0

    for x, y in self.dam_dataloader:
        x, y = x.cuda(), y.cuda()
        logits = self.mi(x)
        # AUCMLoss expects scores in [0, 1], hence the sigmoid on the logits.
        preds = torch.sigmoid(logits)
        self.dam_optimizer.zero_grad()
        risk = self.dam_loss(preds, y)
        risk.backward()
        self.dam_optimizer.step()
        total_risk += risk.detach().item()  # accumulate as a plain float

    return total_risk / len(self.dam_dataloader)
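For completeness, the AUCs in the log are computed along these lines. This is only a minimal sketch: roc_auc_score comes from scikit-learn, and evaluate_auc is an illustrative stand-in for the actual logging code in Trainer.py.

from sklearn.metrics import roc_auc_score
import torch

@torch.no_grad()
def evaluate_auc(self, loader):
    # Collect sigmoid scores and labels over one split, then score them
    # with scikit-learn's roc_auc_score.
    scores, labels = [], []
    for x, y in loader:
        scores.append(torch.sigmoid(self.mi(x.cuda())).cpu())
        labels.append(y)
    return roc_auc_score(torch.cat(labels).numpy(),
                         torch.cat(scores).numpy())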

I am wondering if I need to tune its hyper-parameters or check other things.
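The kind of sweep I have in mind is sketched below; the candidate values are my guesses rather than LibAUC recommendations, and the per-combination retraining body is elided.

import itertools

# Hypothetical grid over the hyper-parameters passed to PESG above; each
# combination should retrain from a fresh model init and be compared on
# validation AUC.
for lr, margin, epoch_decay in itertools.product(
        [0.1, 0.05, 0.01],   # initial learning rate
        [0.5, 1.0],          # margin of the AUCM objective
        [0.003, 0.03]):      # epoch-level regularization strength
    ...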

yzhuoning commented 1 year ago

Please try our latest version to see if there is still an issue. Thanks!
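For reference, a minimal sketch of the epoch schedule used in the LibAUC tutorials: update_regularizer(decay_factor=10) is the decay hook from the library's published examples, while model, total_epochs, and train_one_epoch are illustrative placeholders.

from libauc.losses import AUCMLoss
from libauc.optimizers import PESG

loss_fn = AUCMLoss()
optimizer = PESG(model=model, loss_fn=loss_fn, lr=0.1, momentum=0.9,
                 margin=1.0, epoch_decay=0.003, weight_decay=1e-4)

total_epochs = 100  # illustrative
for epoch in range(total_epochs):
    # The tutorials decay the learning rate and refresh the epoch-level
    # regularizer partway through training, which is how their examples
    # push past plateaus like the one in the log above.
    if epoch in (int(0.5 * total_epochs), int(0.75 * total_epochs)):
        optimizer.update_regularizer(decay_factor=10)
    train_one_epoch(model, optimizer, loss_fn)  # one pass over the data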