alibaba / MNN

MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
http://www.mnn.zone/

Input data format for 3D CNNs and model conversion of grouped convolutions #1484

Closed uikino closed 3 years ago

uikino commented 3 years ago

Summary

I trained a model with the method from the paper behind https://github.com/okankop/Efficient-3DCNNs. The models there adapt conventional 2D CNNs such as MobileNet and ResNet so that they can process video. I plan to deploy it on Android for gesture recognition.

Questions

  1. The input format is (N, C, F, H, W), where F is the number of frames. How should I handle this layout? Should I treat it as MNN_DATA_FORMAT_UNKNOWN, or extend the framework with my own MNN_DATA_FORMAT_MYFORMAT?
  2. When converting the ONNX model to an MNN model with MNNConvert, it reports `group conv3d not support`. After reading the converter code and documentation, I found that the Conv operator is supported, but for 3D convolution the converter only handles conv3d with group equal to 1. Should I write a custom operator or modify the converter code? Why is only group = 1 supported for 3D convolution, and how complete is MNN's support for 3D CNNs in general? Finally, how much work would it take to support group > 1 (one possible workaround is sketched after this list)?
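
For question 2, one workaround that avoids touching the converter at all is to rewrite each grouped Conv3d as `groups` ordinary single-group convolutions plus a concat before exporting to ONNX, since a grouped convolution is mathematically just independent convolutions over channel slices. This is a minimal sketch, not anything from MNN's codebase; the module name SplitGroupConv3d is hypothetical:

import torch
import torch.nn as nn

class SplitGroupConv3d(nn.Module):
    '''Drop-in rewrite of nn.Conv3d(..., groups=g) as g group-1 convolutions.'''
    def __init__(self, inp, oup, kernel_size, stride=1, padding=0, groups=1):
        super(SplitGroupConv3d, self).__init__()
        assert inp % groups == 0 and oup % groups == 0
        self.groups = groups
        self.convs = nn.ModuleList([
            nn.Conv3d(inp // groups, oup // groups, kernel_size,
                      stride=stride, padding=padding, bias=False)
            for _ in range(groups)
        ])

    def forward(self, x):
        # Split the channel axis, convolve each slice independently,
        # then concatenate -- exactly what a grouped conv computes.
        chunks = torch.chunk(x, self.groups, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)],
                         dim=1)

Weights from an already-trained grouped conv can be copied in by slicing its weight tensor along the output-channel axis. The catch is the depthwise case (groups equal to the channel count), which this rewrite turns into many tiny convolutions and is therefore likely to be slow.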
jxt1234 commented 3 years ago

Please send me the ONNX model that contains the grouped convolutions and I will take a look.

uikino commented 3 years ago

The model is jester.onnx. Network structure:

'''ShuffleNetV2 in PyTorch.

See the paper "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design" for more details.
'''

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from collections import OrderedDict
from torch.nn import init
import math

def conv_bn(inp, oup, stride):
    return nn.Sequential(
        nn.Conv3d(inp, oup, kernel_size=3, stride=stride, padding=(1,1,1), bias=False),
        nn.BatchNorm3d(oup),
        nn.ReLU(inplace=True)
    )

def conv_1x1x1_bn(inp, oup):
    return nn.Sequential(
        nn.Conv3d(inp, oup, 1, 1, 0, bias=False),
        nn.BatchNorm3d(oup),
        nn.ReLU(inplace=True)
    )

def channel_shuffle(x, groups):
    '''Channel shuffle: [N,C,D,H,W] -> [N,g,C/g,D,H,W] -> [N,C/g,g,D,H,W] -> [N,C,D,H,W]'''
    batchsize, num_channels, depth, height, width = x.data.size()
    channels_per_group = num_channels // groups
    # reshape
    x = x.view(batchsize, groups, 
        channels_per_group, depth, height, width)
    #permute
    x = x.permute(0,2,1,3,4,5).contiguous()
    # flatten
    x = x.view(batchsize, num_channels, depth, height, width)
    return x

class InvertedResidual(nn.Module):
    def __init__(self, inp, oup, stride):
        super(InvertedResidual, self).__init__()
        self.stride = stride
        assert stride in [1, 2]

        oup_inc = oup//2

        if self.stride == 1:
            #assert inp == oup_inc
            self.banch2 = nn.Sequential(
                # pw
                nn.Conv3d(oup_inc, oup_inc, 1, 1, 0, bias=False),
                nn.BatchNorm3d(oup_inc),
                nn.ReLU(inplace=True),
                # dw
                nn.Conv3d(oup_inc, oup_inc, 3, stride, 1, groups=oup_inc, bias=False),
                nn.BatchNorm3d(oup_inc),
                # pw-linear
                nn.Conv3d(oup_inc, oup_inc, 1, 1, 0, bias=False),
                nn.BatchNorm3d(oup_inc),
                nn.ReLU(inplace=True),
            )                
        else:                  
            self.banch1 = nn.Sequential(
                # dw
                nn.Conv3d(inp, inp, 3, stride, 1, groups=inp, bias=False),
                nn.BatchNorm3d(inp),
                # pw-linear
                nn.Conv3d(inp, oup_inc, 1, 1, 0, bias=False),
                nn.BatchNorm3d(oup_inc),
                nn.ReLU(inplace=True),
            )        

            self.banch2 = nn.Sequential(
                # pw
                nn.Conv3d(inp, oup_inc, 1, 1, 0, bias=False),
                nn.BatchNorm3d(oup_inc),
                nn.ReLU(inplace=True),
                # dw
                nn.Conv3d(oup_inc, oup_inc, 3, stride, 1, groups=oup_inc, bias=False),
                nn.BatchNorm3d(oup_inc),
                # pw-linear
                nn.Conv3d(oup_inc, oup_inc, 1, 1, 0, bias=False),
                nn.BatchNorm3d(oup_inc),
                nn.ReLU(inplace=True),
            )

    @staticmethod
    def _concat(x, out):
        # concatenate along channel axis
        return torch.cat((x, out), 1)        

    def forward(self, x):
        if self.stride == 1:
            x1 = x[:, :(x.shape[1]//2), :, :, :]
            x2 = x[:, (x.shape[1]//2):, :, :, :]
            out = self._concat(x1, self.banch2(x2))
        elif self.stride == 2:
            out = self._concat(self.banch1(x), self.banch2(x))

        return channel_shuffle(out, 2)

class ShuffleNetV2(nn.Module):
    def __init__(self, num_classes=600, sample_size=112, width_mult=1.):
        super(ShuffleNetV2, self).__init__()
        assert sample_size % 16 == 0

        self.stage_repeats = [4, 8, 4]
        # index 0 is invalid and should never be called.
        # only used for indexing convenience.
        if width_mult == 0.25:
            self.stage_out_channels = [-1, 24,  32,  64, 128, 1024]
        elif width_mult == 0.5:
            self.stage_out_channels = [-1, 24,  48,  96, 192, 1024]
        elif width_mult == 1.0:
            self.stage_out_channels = [-1, 24, 116, 232, 464, 1024]
        elif width_mult == 1.5:
            self.stage_out_channels = [-1, 24, 176, 352, 704, 1024]
        elif width_mult == 2.0:
            self.stage_out_channels = [-1, 24, 224, 488, 976, 2048]
        else:
            raise ValueError(
                "width_mult {} is not supported; expected one of "
                "0.25, 0.5, 1.0, 1.5, 2.0".format(width_mult))

        # building first layer
        input_channel = self.stage_out_channels[1]
        self.conv1 = conv_bn(3, input_channel, stride=(1,2,2))
        self.maxpool = nn.MaxPool3d(kernel_size=3, stride=2, padding=1)

        self.features = []
        # building inverted residual blocks
        for idxstage in range(len(self.stage_repeats)):
            numrepeat = self.stage_repeats[idxstage]
            output_channel = self.stage_out_channels[idxstage+2]
            for i in range(numrepeat):
                stride = 2 if i == 0 else 1
                self.features.append(InvertedResidual(input_channel, output_channel, stride))
                input_channel = output_channel

        # make it nn.Sequential
        self.features = nn.Sequential(*self.features)

        # building last several layers
        self.conv_last      = conv_1x1x1_bn(input_channel, self.stage_out_channels[-1])

        # building classifier
        self.classifier = nn.Sequential(
                            nn.Dropout(0.2),
                            nn.Linear(self.stage_out_channels[-1], num_classes)
                            )

    def forward(self, x):
        out = self.conv1(x)
        out = self.maxpool(out)
        out = self.features(out)
        out = self.conv_last(out)
        out = F.avg_pool3d(out, out.data.size()[-3:])
        out = out.view(out.size(0), -1)
        out = self.classifier(out)
        return out

def get_fine_tuning_parameters(model, ft_portion):
    if ft_portion == "complete":
        return model.parameters()

    elif ft_portion == "last_layer":
        ft_module_names = []
        ft_module_names.append('classifier')

        parameters = []
        for k, v in model.named_parameters():
            for ft_module in ft_module_names:
                if ft_module in k:
                    parameters.append({'params': v})
                    break
            else:
                parameters.append({'params': v, 'lr': 0.0})
        return parameters

    else:
        raise ValueError("Unsupported ft_portion: 'complete' or 'last_layer' expected")

def get_model(**kwargs):
    """
    Returns the model.
    """
    model = ShuffleNetV2(**kwargs)
    return model

if __name__ == "__main__":
    model = get_model(num_classes=600, sample_size=112, width_mult=1.)
    model = model.cuda()
    model = nn.DataParallel(model, device_ids=None)
    print(model)

    input_var = Variable(torch.randn(8, 3, 16, 112, 112))
    output = model(input_var)
    print(output.shape)
maoyichun commented 3 years ago

@uikino I ran into the same problem. Have you found any other video classification models that are a good fit for MNN deployment?

YongdongTan commented 3 years ago

@uikino Hello, I am hitting this problem as well. Did you manage to convert the model?

uikino commented 3 years ago

@maoyichun @YongdongTan Sorry, I never solved this problem.

First, a caveat: I have never studied machine learning systematically, so my knowledge here is limited.

From the material I have read, there is no particularly good solution for running video models on mobile platforms, especially 3D CNNs. I tried NCNN, MNN, TNN, and TVM, and none of them worked. Operator coverage in most mobile inference frameworks is incomplete, usually as a deliberate trade-off.

So I have a few suggestions:

  1. Drop the 3D CNN and use a GRU or LSTM instead (a minimal sketch follows at the end of this comment)
  2. Implement an equivalent network yourself on top of BLAS
  3. Decompose the problem; for example, my gesture-recognition task could use MediaPipe's hand landmarks solution to extract hand keypoints and then feed them to a small downstream network
  4. Ship an ONNX Runtime alongside the app

I wrote this down off the top of my head, so please point out anything that is wrong.
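
For suggestion 1, here is a minimal sketch of a per-frame 2D CNN followed by a GRU, assuming a torchvision MobileNetV2 backbone; the class name FrameGRUClassifier and the hidden size are illustrative, and whether the recurrent layer converts cleanly still has to be verified against the target framework:

import torch
import torch.nn as nn
import torchvision.models as models

class FrameGRUClassifier(nn.Module):
    '''2D CNN per frame + GRU over time; no Conv3d anywhere.'''
    def __init__(self, num_classes, hidden=256):
        super(FrameGRUClassifier, self).__init__()
        backbone = models.mobilenet_v2(pretrained=True)
        self.features = backbone.features      # 2D conv feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(1280, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (N, C, F, H, W) -- fold the frame axis into the batch axis
        n, c, f, h, w = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(n * f, c, h, w)
        feat = self.pool(self.features(x)).flatten(1)  # (N*F, 1280)
        feat = feat.view(n, f, -1)                     # (N, F, 1280)
        out, _ = self.gru(feat)
        return self.fc(out[:, -1])                     # last time step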

maoyichun commented 3 years ago

@uikino Thanks. I have since implemented action recognition with a CNN+LSTM.

barbecacov commented 3 years ago

I am currently using TSM, which is pure 2D convolution, for the action recognition task. After converting to MNN, the latency is acceptable.
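
For context, the core of TSM ("TSM: Temporal Shift Module for Efficient Video Understanding", Lin et al.) is a zero-parameter channel shift along the frame axis, which is why the network stays pure 2D convolution and converts easily. A minimal sketch of that shift, following the convention of flattening clips into the batch axis:

import torch

def temporal_shift(x, n_segment, fold_div=8):
    # x: (N*T, C, H, W), where T = n_segment frames per clip
    nt, c, h, w = x.shape
    n = nt // n_segment
    x = x.view(n, n_segment, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift one frame back
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift one frame forward
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels untouched
    return out.view(nt, c, h, w)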

maoyichun commented 3 years ago

@barbecacov Did you deploy the TSM online model? What is the latency like?

barbecacov commented 3 years ago

@maoyichun I did both online and offline. The online model is just a normal MobileNet forward pass, so it is very fast; the offline one is slower.

maoyichun commented 3 years ago

@barbecacov What platform are you deploying on, and how is the model's accuracy?

uikino commented 3 years ago

I have changed direction and am now using TSM with a ResNet50 backbone.