ContinualAI / avalanche

Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
http://avalanche.continualai.org
MIT License

[BACKWARD INCOMPATIBLE] Explicit return values for strategy methods #660

Open AntonioCarta opened 3 years ago

AntonioCarta commented 3 years ago

Over time, we added new extension points to the BaseStrategy that allow easy customization. This is what the new API looks like (most of it is already available):

class Naive(BaseStrategy):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    @property
    def mb_x(self):
        """ Current mini-batch input. """
        return self.mbatch[0]

    @property
    def mb_y(self):
        """ Current mini-batch target. """
        return self.mbatch[1]

    @property
    def mb_task_id(self):
        assert len(self.mbatch) >= 3
        return self.mbatch[-1]

    def train_dataset_adaptation(self, **kwargs):
        ...

    def make_train_dataloader(self, num_workers=0, shuffle=True,
                              pin_memory=True, **kwargs):
        ...

    def make_eval_dataloader(self, num_workers=0, pin_memory=True,
                             **kwargs):
        ...

    def make_optimizer(self):
        ...

    def model_adaptation(self):
        ...

    def forward(self):
        ...

    def criterion(self):
        ...
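
For example, a user-defined strategy only needs to override the extension points it wants to change. A minimal sketch, assuming the API above (the L2 penalty and its weight are made up for illustration, not part of the library):

class RegularizedNaive(Naive):
    """Naive strategy with an illustrative L2 penalty added to the loss."""

    def __init__(self, reg_lambda=0.01, **kwargs):
        super().__init__(**kwargs)
        self.reg_lambda = reg_lambda

    def criterion(self):
        # base loss on the current mini-batch, plus a weight penalty
        loss = super().criterion()
        l2 = sum(p.pow(2).sum() for p in self.model.parameters())
        return loss + self.reg_lambda * l2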

Apart from what has already been done, I want to change the methods that create objects (dataloaders, models, the forward output, ...) so that they have an explicit return value. Old API:

    def make_train_dataloader(self, **kwargs):
        self.dataloader = TaskBalancedDataLoader(...)

New API:

    def make_train_dataloader(self, **kwargs):
        return TaskBalancedDataLoader(...)

I think explicit return values are slightly better, since they make it obvious that these methods are expected to create something. This is backward incompatible, but it shouldn't be hard to update old strategies.
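
For a subclass that customizes one of these objects, the migration mostly means working with the returned value instead of the attribute. A hedged sketch (wrap_with_replay is a hypothetical helper, not an Avalanche API):

class ReplayLikeStrategy(BaseStrategy):
    # Old API: call super() for its side effect, then patch the attribute:
    #
    #     def make_train_dataloader(self, **kwargs):
    #         super().make_train_dataloader(**kwargs)
    #         self.dataloader = wrap_with_replay(self.dataloader)

    # New API: take the dataloader returned by the base class,
    # wrap it, and return it.
    def make_train_dataloader(self, **kwargs):
        dl = super().make_train_dataloader(**kwargs)
        return wrap_with_replay(dl)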

AndreaCossu commented 3 years ago

As long as we still assign the returned values to self, there should be no modifications needed in the evaluation module.
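
That is, the assignments would simply move to the call sites in the base training loop, roughly like this (a sketch, not the actual implementation):

    # inside BaseStrategy.train(), the strategy keeps setting its own state:
    self.train_dataset_adaptation(**kwargs)
    self.dataloader = self.make_train_dataloader(**kwargs)
    self.model = self.model_adaptation()

    # and inside the mini-batch loop:
    self.mb_output = self.forward()
    self.loss = self.criterion()

so metrics that read self.dataloader, self.mb_output or self.loss keep working unchanged.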