nnstreamer / nntrainer

NNTrainer is a software framework for training neural network models on devices.
Apache License 2.0

Add Depthwise 2D Convolution Layer #2520

Open DonghakPark opened 5 months ago

DonghakPark commented 5 months ago

This layer is necessary to support various applications such as SV.

In TensorFlow, it is supported as below:

tf.keras.layers.DepthwiseConv2D(
    kernel_size,
    strides=(1, 1),
    padding='valid',
    depth_multiplier=1,
    data_format=None,
    dilation_rate=(1, 1),
    activation=None,
    use_bias=True,
    depthwise_initializer='glorot_uniform',
    bias_initializer='zeros',
    depthwise_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    depthwise_constraint=None,
    bias_constraint=None,
    **kwargs
)
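
For reference, a minimal usage sketch (the input shape and parameter values here are just an example I chose): with depth_multiplier=2 on an 8-channel input, each input channel gets two filters, so the output has 16 channels.

import tensorflow as tf

x = tf.random.normal([1, 32, 32, 8])  # NHWC input with 8 channels (example shape)
layer = tf.keras.layers.DepthwiseConv2D(kernel_size=3, depth_multiplier=2, padding='same')
y = layer(x)
print(y.shape)  # (1, 32, 32, 16) -- 8 input channels * depth_multiplier 2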

In PyTorch, it is supported via the groups argument of torch.nn.Conv2d:

torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)

groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

At groups=1, all inputs are convolved to all outputs.

At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.

At groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels / in_channels), i.e., the depthwise case; see the sketch after this list.
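
To make the groups=in_channels case concrete, here is a minimal PyTorch sketch (the input shape is just an example I chose) matching the depth_multiplier=2 setting above, so out_channels = in_channels * 2 and groups = in_channels:

import torch

x = torch.randn(1, 8, 32, 32)  # NCHW input with 8 channels (example shape)
conv = torch.nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3,
                       padding=1, groups=8)  # 16 / 8 = 2 filters per input channel
y = conv(x)
print(y.shape)  # torch.Size([1, 16, 32, 32])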

I would appreciate it if you could provide your opinion on how nntrainer should support depthwise convolution.

We currently support TensorFlow Lite export functionality. Would it be better to design it in a way that is friendly to TensorFlow Lite?

Alternatively, would it be better to support it in a similar format to PyTorch?

taos-ci commented 5 months ago

:octocat: cibot: Thank you for posting issue #2520. The person in charge will reply soon.