dog-qiuqiu / Yolo-Fastest

:zap: An ultra-lightweight, general-purpose object-detection algorithm based on YOLO: only 250 MFLOPs of computation, an ncnn model size of only 666 KB, 15+ fps on a Raspberry Pi 3B, and 178+ fps on mobile devices

model convert (weights to h5, pb...) error #31

Open 48313758a opened 4 years ago

48313758a commented 4 years ago

When I convert the weights file to other model formats, it always reports this error: buffer is too small for requested array. I checked the .cfg file and found some 'groups' variables in the [convolutional] layers. Maybe the converter reads more weights than GroupConv2D actually needs, leaving too few weights for the remaining layers, but I don't know how to split the groups in yolo-fastest (filters=groups?). Is the error caused by this problem or something else, and how can I solve it?
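
For concreteness, darknet stores size × size × (in_channels / groups) × filters kernel weights for a grouped [convolutional] block, so a parser that ignores groups asks for groups times more floats than the file actually holds. A minimal sketch of the mismatch (the layer sizes here are made up, not taken from the yolo-fastest .cfg):

    # Hypothetical grouped [convolutional] block parameters
    size, in_channels, filters, groups = 3, 64, 64, 8

    # Floats darknet actually stores for the grouped kernel
    stored = size * size * (in_channels // groups) * filters    # 4608

    # Floats a converter that ignores `groups` tries to read
    expected = size * size * in_channels * filters              # 36864

    # expected > stored -> "buffer is too small for requested array"
    print(stored, expected)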

waittim commented 4 years ago

I tried to convert the Darknet model into a Keras model. I think the code from the keras-yolo3 repo doesn't account for `groups` in Conv2D, because it is based on an old version of Keras that doesn't support grouped convolutions. Maybe you can try this code, which is based on the latest TF version, to convert the Conv2D layers.

    # Module-level imports this snippet relies on:
    # import numpy as np
    # from tensorflow.keras import backend as K
    # from tensorflow.keras.layers import (Conv2D, BatchNormalization,
    #                                      LeakyReLU, ZeroPadding2D)
    # from tensorflow.keras.regularizers import l2
    def conv(self, block):
        '''When reading darknet's yolov3.weights file, the order is:
          1 - biases;
          2 - if batch norm is present, then scale, mean, var follow;
          3 - the convolution kernel weights
        '''
        # Darknet serializes convolutional weights as:
        # [bias/beta, [gamma, mean, variance], conv_weights]
        self.count[0] += 1
        # read conv block
        filters = int(block['filters'])
        size = int(block['size'])
        stride = int(block['stride'])
        pad = int(block['pad'])
        activation = block['activation']
        groups = int(block['groups']) if 'groups' in block else 1

        padding = 'same' if pad == 1 and stride == 1 else 'valid'
        batch_normalize = 'batch_normalize' in block

        prev_layer_shape = K.int_shape(self.prev_layer)
        weights_shape = (size, size, int(prev_layer_shape[-1]/groups), filters)
        darknet_w_shape = (filters, weights_shape[2], size, size)
        weights_size = np.prod(weights_shape)

        print('+',self.count[0],'conv2d', 
              'bn' if batch_normalize else ' ',
              activation,
              weights_shape)

        # Read one bias value per filter
        conv_bias = self.weight_loader.parser_buffer(
                                 shape=(filters,),
                                 dtype='float32',
                                 buffer_size=filters*4)

        # If batch norm is present, read scale, mean, var (one per filter) next
        if batch_normalize:
            bn_weight_list = self.bn(filters, conv_bias)

        # Read the convolution kernel weights
        conv_weights = self.weight_loader.parser_buffer(
                              shape=darknet_w_shape,
                              dtype='float32',
                              buffer_size=weights_size*4)
        # DarkNet conv_weights are serialized Caffe-style:
        # (out_dim, in_dim, height, width)
        # We would like to set these to Tensorflow order:
        # (height, width, in_dim, out_dim)

        conv_weights = np.transpose(conv_weights, [2, 3, 1, 0])
        conv_weights = [conv_weights] if batch_normalize else \
                              [conv_weights, conv_bias]

        act_fn = None
        if activation == 'leaky':
            pass  # LeakyReLU is applied as a separate layer below
        elif activation != 'linear':
            raise ValueError('Unsupported activation: {}'.format(activation))

        if stride > 1:
            # Match darknet's asymmetric top/left padding for strided convs
            self.prev_layer = ZeroPadding2D(((1, 0), (1, 0)))(self.prev_layer)

        conv_layer = (Conv2D(
                filters, (size, size),
                strides=(stride, stride),
                kernel_regularizer=l2(self.weight_decay),
                use_bias=not batch_normalize,
                weights=conv_weights,
                activation=act_fn,
                groups=groups,
                padding=padding))(self.prev_layer)

        if batch_normalize:
            conv_layer = BatchNormalization(weights=bn_weight_list)(conv_layer)
        self.prev_layer = conv_layer

        if activation == 'linear':
            self.all_layers.append(self.prev_layer)
        elif activation == 'leaky':
            act_layer = LeakyReLU(alpha=0.1)(self.prev_layer)
            self.prev_layer = act_layer
            self.all_layers.append(act_layer)
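
For what it's worth, the `groups` argument of `tf.keras.layers.Conv2D` only exists in TensorFlow 2.3 and newer, which is worth checking before running the converter. A quick standalone sanity check (the shapes here are arbitrary):

    import numpy as np
    import tensorflow as tf  # needs TF >= 2.3 for the `groups` argument

    x = np.zeros((1, 16, 16, 64), dtype=np.float32)
    layer = tf.keras.layers.Conv2D(filters=64, kernel_size=3,
                                   groups=8, padding='same')
    print(layer(x).shape)      # (1, 16, 16, 64)
    print(layer.kernel.shape)  # (3, 3, 8, 64): per-group in_dim is 64/8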

It dealt with the 'buffer is too small for requested array' error. However, when I was training the Keras model, I ran into some other errors, so I abandoned this approach. If you find a usable approach, please let me know.

Thank you!