dbolya / yolact

A simple, fully convolutional model for real-time instance segmentation.
MIT License

where is the ScatterWrapper? #410

Open shenjun1994 opened 4 years ago

shenjun1994 commented 4 years ago

I can't find the definition of ScatterWrapper anywhere in the repository, but it is used in the compute_validation_loss function in train.py. Could you tell me where ScatterWrapper is defined?

artes14 commented 3 years ago

I'm having the same issue here; one error in the project just won't go away:

Unresolved reference 'ScatterWrapper'

jolly12138 commented 2 years ago

I'm also having the same issue here; the same error won't go away:

Unresolved reference 'ScatterWrapper'

GT84 commented 2 years ago

Has anyone found a solution for this? I'm facing the same problem!

GT84 commented 2 years ago

Ok, searching PyPI, conda, etc. turns up no package that provides this class. But inspecting the repository's history leads to the commit "Optimized training on multiple GPUs."

ScatterWrapperCommit

ScatterWrapperDeletion

The class was removed/replaced in that commit.

My current solution is to add the old code back into my current train.py file:

import torch  # only dependency besides train.py's module-level `args` (parsed command-line arguments)

class ScatterWrapper:
    """ Input is any number of lists. This will preserve them through a dataparallel scatter. """

    def __init__(self, *args):
        for arg in args:
            if not isinstance(arg, list):
                print('Warning: ScatterWrapper got input of non-list type.')
        self.args = args
        self.batch_size = len(args[0])

    def make_mask(self):
        # Tensor of batch indices. DataParallel scatters this tensor across replicas,
        # while the wrapper object itself is passed to every replica unchanged.
        out = torch.Tensor(list(range(self.batch_size))).long()
        if args.cuda:  # `args` here is the module-level argparse result in train.py
            return out.cuda()
        else:
            return out

    def get_args(self, mask):
        # `mask` is the slice of make_mask() that landed on this replica; use it to
        # pull out the matching entries of each wrapped list (moving tensors to the GPU).
        device = mask.device
        mask = [int(x) for x in mask]
        out_args = [[] for _ in self.args]

        for out, arg in zip(out_args, self.args):
            for idx in mask:
                x = arg[idx]
                if isinstance(x, torch.Tensor):
                    x = x.to(device)
                out.append(x)

        return out_args

I haven't checked the impact in detail yet, but basic training works without any problems.
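In case it helps anyone, here is a minimal, self-contained sketch of how the wrapper interacts with nn.DataParallel. This is my own toy example, not code from train.py; ToyLoss, the tensor shapes, and the SimpleNamespace stand-in for train.py's `args` are made up. The wrapped lists never go through scatter themselves; only the index tensor from make_mask() is split across replicas, and each replica uses its slice to fetch its share of the lists via get_args():

import torch
import torch.nn as nn
from types import SimpleNamespace

# Stand-in for train.py's module-level parsed arguments (only `cuda` is needed here).
args = SimpleNamespace(cuda=torch.cuda.is_available())

class ToyLoss(nn.Module):
    """Hypothetical loss module standing in for yolact's criterion."""
    def forward(self, preds, wrapper, wrapper_mask):
        # wrapper_mask holds only the batch indices scattered to this replica,
        # so get_args returns just this replica's share of each wrapped list.
        (targets,) = wrapper.get_args(wrapper_mask)
        tgt = torch.stack(targets)  # already on preds' device, see get_args
        return ((preds - tgt) ** 2).mean()

# Fake batch: per-image "targets" kept as a plain Python list, as in yolact.
preds = torch.randn(8, 10)
targets = [torch.randn(10) for _ in range(8)]

wrapper = ScatterWrapper(targets)
criterion = nn.DataParallel(ToyLoss().cuda()) if args.cuda else ToyLoss()
loss = criterion(preds, wrapper, wrapper.make_mask()).mean()

As far as I can tell from the commit linked above, the old train.py used the same pattern: the criterion received the wrapper plus wrapper.make_mask() and called get_args inside its forward.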