Fermes / yolov3-mxnet

A minimal YOLOv3 implementation in MXNet; no cfg file needed.

got an unexpected keyword argument 'same_mode' when using yolov3-tiny #5

Open BackT0TheFuture opened 6 years ago

BackT0TheFuture commented 6 years ago

Hi there, thanks for your efforts. Recently I came across the exception below when running detect.py. yolov3-mxnet is at the latest commit and my MXNet version is 1.3.0. I suspect it is a version problem, but I don't know how to fix it. Thanks!

Traceback (most recent call last):
  File "detect0.py", line 192, in <module>
    net = TinyDarkNet(input_dim=input_dim, num_classes=num_classes)
  File "E:\yolov3-mxnet\darknet.py", line 308,
 in __init__
    self.max_pool_11 = nn.MaxPool2D(2, 1, same_mode=True)
  File "D:\Anaconda2\lib\site-packages\mxnet-1.3.0-py2.7.egg\mxnet\gluon\nn\conv
_layers.py", line 792, in __init__
    pool_size, strides, padding, ceil_mode, False, 'max', **kwargs)
  File "D:\Anaconda2\lib\site-packages\mxnet-1.3.0-py2.7.egg\mxnet\gluon\nn\conv
_layers.py", line 675, in __init__
    super(_Pooling, self).__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'same_mode'
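
For reference, the traceback shows that _Pooling.__init__ only receives pool_size, strides, padding and ceil_mode in MXNet 1.3.0, so same_mode is simply not an accepted keyword there. A minimal reproduction sketch (my own, not from the repo):

from mxnet.gluon import nn

nn.MaxPool2D(2, 1)                   # accepted arguments in MXNet 1.3.0
nn.MaxPool2D(2, 1, same_mode=True)   # raises the TypeError above
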
ngochieu642 commented 5 years ago

I got the same problem too. Have you figured out a solution for this?

ngochieu642 commented 5 years ago

Use this function instead of the original predict_transform

import numpy as np
from mxnet import nd


def predict_transform_tiny(prediction, input_dim, anchors):
    # prediction: concatenated tiny-YOLOv3 output, shape (batch, 2535, num_attrs)
    ctx = prediction.context
    if not isinstance(anchors, nd.NDArray):
        anchors = nd.array(anchors, ctx=ctx)

    batch_size = prediction.shape[0]
    anchors_masks = [[3, 4, 5], [0, 1, 2]]  # 13x13 head uses the larger anchors (3-5), 26x26 head uses 0-2
    strides = [13, 26]                      # grid sizes of the two heads (hard-coded for a 416x416 input)
    step = [(0, 507), (507, 2535)]          # box ranges per head: 13*13*3 = 507, 26*26*3 = 2028
    for i in range(2):
        stride = strides[i]
        grid = np.arange(stride)
        a, b = np.meshgrid(grid, grid)
        x_offset = nd.array(a.reshape((-1, 1)), ctx=ctx)
        y_offset = nd.array(b.reshape((-1, 1)), ctx=ctx)
        x_y_offset = \
            nd.repeat(
                nd.expand_dims(
                    nd.repeat(
                        nd.concat(
                            x_offset, y_offset, dim=1), repeats=3, axis=0
                    ).reshape((-1, 2)),
                    0
                ),
                repeats=batch_size, axis=0
            )
        tmp_anchors = \
            nd.repeat(
                nd.expand_dims(
                    nd.repeat(
                        nd.expand_dims(
                            anchors[anchors_masks[i]], 0
                        ),
                        repeats=stride * stride, axis=0
                    ).reshape((-1, 2)),
                    0
                ),
                repeats=batch_size, axis=0
            )

        # add the cell offsets to x, y and rescale from grid units to input-image pixels
        prediction[:, step[i][0]:step[i][1], :2] += x_y_offset
        prediction[:, step[i][0]:step[i][1], :2] *= (float(input_dim) / stride)
        # decode width/height against the anchors assigned to this head
        prediction[:, step[i][0]:step[i][1], 2:4] = \
            nd.exp(prediction[:, step[i][0]:step[i][1], 2:4]) * tmp_anchors

    return prediction
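
As a quick sanity check of the shapes (my own sketch; the anchor values are the stock tiny-YOLOv3 anchors and only a guess at what this repo uses):

anchors = [(10, 14), (23, 27), (37, 58), (81, 82), (135, 169), (344, 319)]
fake_prediction = nd.random.uniform(shape=(1, 2535, 85))   # 507 + 2028 boxes, 416 input, 80 classes
out = predict_transform_tiny(fake_prediction, 416, anchors)
print(out.shape)   # (1, 2535, 85)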

In darknet.py, change the padding of conv_bn_block_12 and conv_bn_block_13 to 0, change block 11 as shown below, and add the cropping line in hybrid_forward right after x = self.max_pool_11(x). With pool_size 2, stride 1, and padding 1, the pooled feature map is one pixel larger than the input, so cropping the first row and column restores the original spatial size (the effect same_mode was supposed to give).

self.max_pool_11 = nn.MaxPool2D(2, 1, padding=1)    # replaces the same_mode=True call
self.conv_bn_block_12 = ConvBNBlock(1024, 3, 1, 0)  # padding changed to 0
self.conv_bn_block_13 = ConvBNBlock(256, 1, 1, 0)   # padding changed to 0

hybrid_forward(...):
    x = self.max_pool_11(x)
    x = x[:, :, 1:, 1:]    # drop the extra leading row/column introduced by padding=1

However, this method cannot be used together with net.hybridize(); I am still working on another solution.
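
One possible workaround, just a sketch of my own and not something from this repo: do the crop with F.slice instead of NumPy-style indexing. F.slice accepts None to mean "keep everything along this axis" and works on both NDArray and Symbol, so the block should survive net.hybridize(). The class name below is hypothetical.

from mxnet.gluon import nn

class SamePadMaxPool2x2(nn.HybridBlock):
    # 2x2 max pool with stride 1 that keeps the spatial size, emulating same_mode.
    def __init__(self, **kwargs):
        super(SamePadMaxPool2x2, self).__init__(**kwargs)
        self.pool = nn.MaxPool2D(pool_size=2, strides=1, padding=1)

    def hybrid_forward(self, F, x):
        x = self.pool(x)  # padding=1 makes the output one pixel larger: (N, C, H+1, W+1)
        # crop the extra leading row/column; unlike x[:, :, 1:, 1:] this also works on Symbols
        return F.slice(x, begin=(None, None, 1, 1), end=(None, None, None, None))

If this works, self.max_pool_11 = SamePadMaxPool2x2() would replace both the padded pool and the x = x[:, :, 1:, 1:] crop, but I have not verified it against this repo.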