TorchEnsemble-Community / Ensemble-Pytorch

A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
https://ensemble-pytorch.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Votings supported #118

Open London38 opened 2 years ago

London38 commented 2 years ago

Hi @xuyxu, I've been taking a look at Voting.py. If I'm not mistaken, only soft voting is implemented. Are there any plans for implementing majority/hard, plurality, and weighted voting as well?

Thanks in advance
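
For reference, here is a tiny self-contained sketch (not part of the library) of how soft and hard/majority voting differ given the same per-estimator class probabilities; the toy tensors are made up purely for illustration:

import torch

# Toy class probabilities from 3 estimators, a batch of 2 samples, 3 classes.
outputs = [
    torch.tensor([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]),
    torch.tensor([[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]]),
    torch.tensor([[0.4, 0.5, 0.1], [0.7, 0.15, 0.15]]),
]

# Soft voting: average the probability distributions, then take the argmax.
soft_pred = torch.stack(outputs).mean(dim=0).argmax(dim=1)

# Hard (majority) voting: each estimator casts one vote (its argmax),
# and the most frequent class wins.
votes = torch.stack(outputs).argmax(dim=2)  # (n_estimators, batch_size)
hard_pred = votes.mode(dim=0).values        # (batch_size,)

print(soft_pred, hard_pred)  # tensor([1, 0]) vs. tensor([1, 1]): the two can disagree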

xuyxu commented 2 years ago

Hi @London38, great idea! It would be nice if we could support various sub-types of voting during the inference stage. I will work on it when I get a moment ;-)

London38 commented 2 years ago

Thanks @xuyxu, the source where I read about them is Ensemble Methods: Foundations and Algorithms, Zhi-Hua Zhou, page 71. Here is a relatively new one: https://machinelearningmastery.com/horizontal-voting-ensemble/. Edit: more info in the attached image.
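
Weighted voting, which was also mentioned above, could be sketched like this (the weights here are made up for illustration and would in practice come from, e.g., each base estimator's validation accuracy):

import torch

# Per-estimator class probabilities: (n_estimators, batch_size, n_classes).
outputs = torch.rand(5, 4, 3).softmax(dim=2)

# Hypothetical per-estimator weights, e.g. proportional to validation accuracy.
weights = torch.tensor([0.3, 0.1, 0.25, 0.2, 0.15])

# Weighted soft voting: weighted average of the probability distributions.
proba = (weights.view(-1, 1, 1) * outputs).sum(dim=0) / weights.sum()
pred = proba.argmax(dim=1)  # (batch_size,)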

gardberg commented 1 year ago

Hi @xuyxu, I wanted to give implementing another voting strategy a try, but I am a bit unsure about what needs to be changed (I have never contributed before :) ). I started implementing majority voting in the forward method of the VotingClassifier class, but realized the averaging is implemented in some other places as well, such as the validation part of the training loop. Would you or anyone be able to give me some pointers on how to structure it? My idea is to pass a voting_strategy parameter to the constructor of VotingClassifier. I've pasted a rough idea of it below.

@torchensemble_model_doc(
    """Implementation on the VotingClassifier.""", "model"
)
class VotingClassifier(BaseClassifier):

    def __init__(self, voting_strategy="soft", **kwargs):
        super(VotingClassifier, self).__init__(**kwargs)

        self.voting_strategy = voting_strategy

    @torchensemble_model_doc(
        """Implementation on the data forwarding in VotingClassifier.""",
        "classifier_forward",
    )
    def forward(self, *x):
        # Combine class distributions from all base estimators
        # according to the chosen voting strategy.

        # outputs: list of n_estimators tensors, each (batch_size, n_classes)
        outputs = [
            F.softmax(estimator(*x), dim=1) for estimator in self.estimators_
        ]

        # This is where another voting strategy can be implemented.
        if self.voting_strategy == "soft":
            proba = op.average(outputs)
        elif self.voting_strategy == "hard":
            # Hard majority voting: each estimator casts one vote (its argmax)
            # votes: (batch_size,)
            votes = torch.stack(outputs).argmax(dim=2).mode(dim=0)[0]
            # Set the probability of the most voted class to 1
            proba = torch.zeros_like(outputs[0])
            proba.scatter_(1, votes.view(-1, 1), 1)
        else:
            raise ValueError(
                "Unknown voting strategy: {}".format(self.voting_strategy)
            )

        # Returns the combined class probabilities for each sample
        # proba shape: (batch_size, n_classes)
        return proba
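
For context, a hypothetical call site could look like this; voting_strategy is the proposed new argument, the other parameters already exist in VotingClassifier, and the MLP base estimator is only a placeholder:

import torch.nn as nn
from torchensemble import VotingClassifier


class MLP(nn.Module):
    # Minimal placeholder base estimator, only for illustration.
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.linear(x.flatten(1))


model = VotingClassifier(
    estimator=MLP,
    n_estimators=5,
    cuda=False,
    voting_strategy="hard",  # proposed; "soft" would keep the current behaviour
)
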
xuyxu commented 1 year ago

Hi @LukasGardberg, thanks for your contribution!

Is it enough to add the same code snippet to the _forward function in the fit method (Line 171)?

Maybe you could open a pull request first, and we can implement this feature request step by step :D
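
One way to avoid duplicating the snippet might be a small module-level helper that both forward and the validation code inside fit could call; combine_outputs below is just an illustration of that idea, not an existing function in the library:

import torch


def combine_outputs(outputs, voting_strategy="soft"):
    # Combine a list of (batch_size, n_classes) probability tensors
    # according to the chosen voting strategy.
    if voting_strategy == "soft":
        return torch.stack(outputs).mean(dim=0)
    elif voting_strategy == "hard":
        # Each estimator votes with its argmax; the most frequent class wins.
        votes = torch.stack(outputs).argmax(dim=2).mode(dim=0)[0]
        proba = torch.zeros_like(outputs[0])
        proba.scatter_(1, votes.view(-1, 1), 1)
        return proba
    raise ValueError("Unknown voting strategy: {}".format(voting_strategy))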

gardberg commented 1 year ago

@xuyxu Cool, sure, I just opened a pull request with a first idea :)