WongKinYiu / yolor

Implementation of the paper "You Only Learn One Representation: Unified Network for Multiple Tasks" (https://arxiv.org/abs/2105.04206)
GNU General Public License v3.0

Can the model be defined with nn.Sequential, or does it need to use nn.ModuleList? #285

Open alecda573 opened 1 year ago

alecda573 commented 1 year ago

I am trying to understand what happens in your forward_once method here:

```python
def forward_once(self, x, augment=False, verbose=False):
    img_size = x.shape[-2:]  # height, width
    yolo_out, out = [], []
    if verbose:
        print('0', x.shape)
        str = ''

    # Augment images (inference and test only)
    if augment:  # https://github.com/ultralytics/yolov3/issues/931
        nb = x.shape[0]  # batch size
        s = [0.83, 0.67]  # scales
        x = torch.cat((x,
                       torch_utils.scale_img(x.flip(3), s[0]),  # flip-lr and scale
                       torch_utils.scale_img(x, s[1]),  # scale
                       ), 0)

    for i, module in enumerate(self.module_list):
        name = module.__class__.__name__
        if name in ['WeightedFeatureFusion', 'FeatureConcat', 'FeatureConcat2', 'FeatureConcat3',
                    'FeatureConcat_l', 'ScaleChannel', 'ShiftChannel', 'ShiftChannel2D',
                    'ControlChannel', 'ControlChannel2D', 'AlternateChannel', 'AlternateChannel2D',
                    'SelectChannel', 'SelectChannel2D', 'ScaleSpatial']:  # sum, concat
            if verbose:
                l = [i - 1] + module.layers  # layers
                sh = [list(x.shape)] + [list(out[i].shape) for i in module.layers]  # shapes
                str = ' >> ' + ' + '.join(['layer %g %s' % x for x in zip(l, sh)])
            x = module(x, out)  # WeightedFeatureFusion(), FeatureConcat()
        elif name in ['ImplicitA', 'ImplicitM', 'ImplicitC', 'Implicit2DA', 'Implicit2DM', 'Implicit2DC']:
            x = module()
        elif name == 'YOLOLayer':
            yolo_out.append(module(x, out))
        elif name == 'JDELayer':
            yolo_out.append(module(x, out))
        else:  # run module directly, i.e. mtype = 'convolutional', 'upsample', 'maxpool', 'batchnorm2d' etc.
            x = module(x)

        out.append(x if self.routs[i] else [])
        if verbose:
            print('%g/%g %s -' % (i, len(self.module_list), name), list(x.shape), str)
            str = ''

    if self.training:  # train
        return yolo_out
    elif ONNX_EXPORT:  # export
        x = [torch.cat(x, 0) for x in zip(*yolo_out)]
        return x[0], torch.cat(x[1:3], 1)  # scores, boxes: 3780x80, 3780x4
    else:  # inference or test
        x, p = zip(*yolo_out)  # inference output, training output
        x = torch.cat(x, 1)  # cat yolo outputs
        if augment:  # de-augment results
            x = torch.split(x, nb, dim=0)
            x[1][..., :4] /= s[0]  # scale
            x[1][..., 0] = img_size[1] - x[1][..., 0]  # flip lr
            x[2][..., :4] /= s[1]  # scale
            x = torch.cat(x, 1)
        return x, p
```

and why you chose to loop through an nn.ModuleList object in place of an nn.Sequential object. Could this easily support an nn.Sequential object?

Crazylov3 commented 1 year ago

You can't use a shortcut if you use nn.Sequential :)
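
To illustrate (a toy sketch with made-up modules, not the repo's actual classes): nn.Sequential can only thread a single tensor straight through its layers, while the ModuleList loop in `forward_once` keeps an `out` cache that later layers can reach back into, which is exactly what a shortcut needs.

```python
import torch
import torch.nn as nn

# nn.Sequential computes y = l3(l2(l1(x))); layer 3 can never
# also see the output of layer 1.
seq = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                    nn.Conv2d(16, 16, 3, padding=1),
                    nn.Conv2d(16, 16, 3, padding=1))

# A ModuleList loop caches every intermediate output, so any later
# module can index back to an earlier one (the role of `out` above).
class ToyRoutedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.module_list = nn.ModuleList([nn.Conv2d(3, 16, 3, padding=1),
                                          nn.Conv2d(16, 16, 3, padding=1),
                                          nn.Conv2d(16, 16, 3, padding=1)])

    def forward(self, x):
        out = []
        for i, module in enumerate(self.module_list):
            x = module(x)
            if i == 2:           # shortcut: fuse with the output of layer 0
                x = x + out[0]
            out.append(x)
        return x

y = ToyRoutedNet()(torch.randn(1, 3, 32, 32))  # -> torch.Size([1, 16, 32, 32])
```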

alecda573 commented 1 year ago

@Crazylov3 hey, thanks for responding so quickly! Can you explain the purpose of the shortcut? Is there really no way to replicate a shortcut using the Sequential class?

Crazylov3 commented 1 year ago

The main purpose of the shortcut (also known as a skip connection) in deep neural networks is to help information and gradients flow during training. In particular, it addresses the vanishing-gradient problem that can occur when training very deep networks. The idea is to create a direct connection between the input and output of a block of layers, so information can flow from one layer to another without having to pass through several intermediate layers. This helps preserve information and gradients as they propagate through the network, which leads to more stable and efficient training.
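
As a concrete illustration (a generic residual block, not code from this repo), a shortcut is just an element-wise addition of a block's input to its output:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # y = relu(x + F(x)): gradients always have the identity path,
    # even if the conv path's gradients shrink.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # identity shortcut
```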

alecda573 commented 1 year ago

@Crazylov3 so it seems that in the config files the shortcut appears after two consecutive conv layers. Could this be replaced in the config files by the Bottleneck block seen in the yolov7 repo here: https://github.com/WongKinYiu/yolov7/blob/main/models/common.py, so that one could use nn.Sequential in place of nn.ModuleList?

Crazylov3 commented 1 year ago

In general, you can use a shortcut anywhere you want. If you only have one config, it is easy to implement the shortcut inside a block and then stack those blocks in nn.Sequential() (a sketch of this follows below). However, there are many config files here, and supporting them all with nn.Sequential() would be hard to implement. The point of their implementation is code reuse: one source file that can build every config.
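
For a single hard-coded network, that might look like the following (a hypothetical Bottleneck in the spirit of yolov7's models/common.py, simplified here with plain nn.Conv2d layers; the class and arguments are illustrative, not the repo's API):

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    # Hypothetical bottleneck: 1x1 conv -> 3x3 conv, with an internal
    # shortcut when input and output channel counts match.
    def __init__(self, c1, c2, shortcut=True):
        super().__init__()
        c_ = c2 // 2  # hidden channels
        self.cv1 = nn.Conv2d(c1, c_, 1, 1)
        self.cv2 = nn.Conv2d(c_, c2, 3, 1, padding=1)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

# Because the shortcut lives inside the block, the top level can be a
# plain nn.Sequential -- but only for this one fixed architecture, not
# for arbitrary cfg files.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, 2, padding=1),
    Bottleneck(64, 64),
    Bottleneck(64, 64),
)
```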