PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.

TypeError: Argument 'bb' has incorrect type (expected numpy.ndarray, got list) #3175

Closed · fanweiya closed this issue 3 years ago

fanweiya commented 3 years ago

The same batch of data trains fine with Mask R-CNN, but training with SOLOv2 throws this error; both the 2.1 and rc2.0 versions report the same error.

loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
W0526 15:50:47.956619 30358 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.1, Runtime API Version: 11.0
W0526 15:50:47.956655 30358 device_context.cc:372] device: 0, cuDNN Version: 8.0.
/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/io.py:2302: UserWarning: This list is not set, Because of Paramerter not found in program. There are: fc_0.b_0 fc_0.w_0
  format(" ".join(unused_para_list)))
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
W0526 15:50:54.308387 30358 parallel_executor.cc:596] Cannot enable P2P access from 0 to 1
W0526 15:50:54.308430 30358 parallel_executor.cc:596] Cannot enable P2P access from 1 to 0
W0526 15:51:05.095049 30358 fuse_all_reduce_op_pass.cc:79] Find all_reduce operators: 110. To make the speed faster, some all_reduce ops are fused during training, after fusion, the number of all_reduce ops is 60.
/data/PaddleDetection/ppdet/data/reader.py:50: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  format(f, e, str(stack_info)))
2021-05-26 15:51:07,383 - WARNING - fail to map op [Poly2Mask_6be841] with error: Argument 'bb' has incorrect type (expected numpy.ndarray, got list) and stack:
Traceback (most recent call last):
  File "/data/PaddleDetection/ppdet/data/reader.py", line 46, in __call__
    data = f(data, ctx)
  File "/data/PaddleDetection/ppdet/data/transform/operators.py", line 2676, in __call__
    for gt_poly in sample['gt_poly']
  File "/data/PaddleDetection/ppdet/data/transform/operators.py", line 2676, in <listcomp>
    for gt_poly in sample['gt_poly']
  File "/data/PaddleDetection/ppdet/data/transform/operators.py", line 2659, in _poly2mask
    rles = self.maskutils.frPyObjects(mask_ann, img_h, img_w)
  File "pycocotools/_mask.pyx", line 293, in pycocotools._mask.frPyObjects
TypeError: Argument 'bb' has incorrect type (expected numpy.ndarray, got list)

/data/PaddleDetection/ppdet/data/parallel_map.py:243: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  .format(sample.errmsg))
2021-05-26 15:51:07,384 - WARNING - recv endsignal from outq with errmsg[consumer[consumer-457-1] failed to map with error:[Argument 'bb' has incorrect type (expected numpy.ndarray, got list)]]
2021-05-26 15:51:12,708 - INFO - iter: 0, lr: 0.000000, 'loss_ins': '2.955467', 'loss_cate': '0.908734', 'loss': '3.864201', eta: 8:59:44, batch_cost: 0.11994 sec, ips: 16.67473 images/sec
2021-05-26 15:51:16,691 - WARNING - recv endsignal from outq with errmsg[consumer[consumer-457-0] exits for reason[consumer[consumer-457-1] failed to map with error:[Argument 'bb' has incorrect type (expected numpy.ndarray, got list)]]]
/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py:1288: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead
  logging.warn('Your reader has raised an exception!')
2021-05-26 15:51:16,691 - WARNING - Your reader has raised an exception!
Exception in thread Thread-7:
Traceback (most recent call last):
  File "/data/Anaconda3/envs/pp2/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/data/Anaconda3/envs/pp2/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1289, in __thread_main__
    six.reraise(*sys.exc_info())
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1269, in __thread_main__
    for tensors in self._tensor_reader():
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1348, in __tensor_reader_impl__
    for slots in paddle_reader():
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/data_feeder.py", line 550, in __reader_creator__
    for item in reader():
  File "/data/PaddleDetection/ppdet/data/reader.py", line 453, in _reader
    reader.reset()
  File "/data/PaddleDetection/ppdet/data/parallel_map.py", line 259, in reset
    assert not self._exit, "cannot reset for already stopped dataset"
AssertionError: cannot reset for already stopped dataset

Traceback (most recent call last):
  File "tools/train.py", line 399, in <module>
    main()
  File "tools/train.py", line 270, in main
    outs = exe.run(compiled_train_prog, fetch_list=train_values)
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1110, in run
    six.reraise(*sys.exc_info())
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/six.py", line 703, in reraise
    raise value
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1108, in run
    return_merged=return_merged)
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1251, in _run_impl
    return_merged=return_merged)
  File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/executor.py", line 913, in _run_parallel
    tensors = exe.run(fetch_var_names, return_merged)._move_to_list()
SystemError: In user code:

    File "tools/train.py", line 399, in <module>
      main()
    File "tools/train.py", line 135, in main
      feed_vars, train_loader = model.build_inputs(**inputs_def)
    File "/data/PaddleDetection/ppdet/modeling/architectures/solov2.py", line 163, in build_inputs
      iterable=iterable) if use_dataloader else None
    File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py", line 749, in from_generator
      iterable, return_list, drop_last)
    File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1119, in __init__
      self._init_non_iterable()
    File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/reader.py", line 1221, in _init_non_iterable
      attrs={'drop_last': self._drop_last})
    File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/framework.py", line 3023, in append_op
      attrs=kwargs.get("attrs", None))
    File "/data/Anaconda3/envs/pp2/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2107, in __init__
      for frame in traceback.extract_stack():

    FatalError: Blocking queue is killed because the data reader raises an exception.
      [Hint: Expected killed_ != true, but received killed_:1 == true:1.] (at /paddle/paddle/fluid/operators/reader/blocking_queue.h:158)
      [operator < read > error]

nemonameless commented 3 years ago

We recommend using PaddlePaddle 2.1 together with PaddleDetection's current default branch, release/2.1, or the develop branch. The default is now the dynamic-graph implementation; the original static-graph code lives under the static folder. Could you share more details about your environment and configuration?

yghstill commented 3 years ago

@fanweiya SOLOv2 needs to convert polygons to masks, which is a bit different from Mask R-CNN. This error occurs because some segmentation fields in your annotation file contain only 4 values, so pycocotools parses them as boxes, and the poly-to-mask conversion then fails: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/_mask.pyx#L292 We suggest re-annotating the segmentation information; each segmentation polygon should contain more than 4 coordinate values (i.e. at least three points).
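
For reference, a minimal sketch that reproduces this dispatch behaviour (the image size and coordinates below are made up; it only assumes a standard pycocotools install): frPyObjects routes a list whose first element has exactly 4 values to its bounding-box branch, which expects a numpy array, so a plain Python list raises exactly this TypeError, while a polygon with more than 4 values is converted to RLE as expected.

import pycocotools.mask as mask_util

img_h, img_w = 100, 100

# A valid polygon: more than 4 coordinate values, converted to RLE as expected.
good_poly = [[10.0, 10.0, 50.0, 10.0, 50.0, 50.0, 10.0, 50.0]]
print(mask_util.frPyObjects(good_poly, img_h, img_w))

# A degenerate "polygon" with exactly 4 values: frPyObjects treats it as a
# bounding box and calls frBbox, which expects numpy.ndarray, so this raises
# TypeError: Argument 'bb' has incorrect type (expected numpy.ndarray, got list)
bad_poly = [[30.0, 30.0, 60.0, 60.0]]
print(mask_util.frPyObjects(bad_poly, img_h, img_w))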

fanweiya commented 3 years ago

@yghstill All of my annotations have more than 4 points; the annotations are in this train.json

fanweiya commented 3 years ago

@nemonameless I am already on the latest version and it still fails. This is the reader config I am using:

TrainReader:
  batch_size: 2
  worker_num: 2
  inputs_def:
    fields: ['image', 'im_id', 'gt_segm']
  dataset:
    !COCODataSet
    dataset_dir: dataset/instance_seg
    anno_path: train.json
    image_dir: JPEGImages
  sample_transforms:
  - !DecodeImage
    to_rgb: true
  - !Poly2Mask {}
  - !ResizeImage
    target_size: 800
    max_size: 1333
    interp: 1
    use_cv2: true
    resize_box: true
  - !RandomFlipImage
    prob: 0.5
  - !NormalizeImage
    is_channel_first: false
    is_scale: true
    mean: [0.485,0.456,0.406]
    std: [0.229, 0.224,0.225]
  - !Permute
    to_bgr: false
    channel_first: true
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
  - !Gt2Solov2Target
    num_grids: [40, 36, 24, 16, 12]
    scale_ranges: [[1, 96], [48, 192], [96, 384], [192, 768], [384, 2048]]
    coord_sigma: 0.2
  shuffle: True

EvalReader:
  inputs_def:
    fields: ['image', 'im_info', 'im_id']
  dataset:
    !COCODataSet
    image_dir: JPEGImages
    anno_path: val.json
    dataset_dir: dataset/instance_seg
  sample_transforms:
  - !DecodeImage
    to_rgb: true
  - !ResizeImage
    interp: 1
    max_size: 1333
    target_size: 800
    use_cv2: true
  - !NormalizeImage
    is_channel_first: false
    is_scale: true
    mean: [0.485,0.456,0.406]
    std: [0.229, 0.224,0.225]
  - !Permute
    channel_first: true
    to_bgr: false
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
    use_padded_im_info: false
  # only support batch_size=1 when evaluation
  batch_size: 1
  shuffle: false
  drop_last: false
  drop_empty: false
  worker_num: 2

TestReader:
  inputs_def:
    fields: ['image', 'im_info', 'im_id', 'im_shape']
  dataset:
    !ImageFolder
    anno_path: dataset/instance_seg/val.json
  sample_transforms:
  - !DecodeImage
    to_rgb: true
  - !ResizeImage
    interp: 1
    max_size: 1333
    target_size: 800
    use_cv2: true
  - !NormalizeImage
    is_channel_first: false
    is_scale: true
    mean: [0.485,0.456,0.406]
    std: [0.229, 0.224,0.225]
  - !Permute
    channel_first: true
    to_bgr: false
  batch_transforms:
  - !PadBatch
    pad_to_stride: 32
    use_padded_im_info: false

yghstill commented 3 years ago

@fanweiya Please double-check this json file. Using the json file you provided, I tested it with the following script:

# Find the annotations in the JSON that trigger the error
import json

JSON_LOC = "train.json"

# Open and parse the annotation file
with open(JSON_LOC, "r") as val_json:
    json_object = json.load(val_json)

# A segmentation entry with exactly 4 values is parsed as a box by pycocotools
for i, instance in enumerate(json_object["annotations"]):
    if len(instance["segmentation"][0]) == 4:
        print("instance number", i, "raises error:", instance["segmentation"][0])

The result is as follows:

('instance number', 697, 'raises error:', [1804.9079754601225, 1321.4723926380368, 1684.049079754601, 1201.2269938650306])
('instance number', 700, 'raises error:', [933.3333333333334, 2821.4285714285716, 621.4285714285714, 2890.4761904761904])
('instance number', 1179, 'raises error:', [2896.363636363636, 936.3636363636363, 2918.181818181818, 623.6363636363636])

Please correct the json file and then restart training.
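
If re-annotating is not immediately possible, one possible stopgap (a sketch only, not an official PaddleDetection tool; the output filename and the choice to simply drop the offending entries are assumptions) is to filter out every annotation whose segmentation has 4 or fewer coordinate values before training:

import json

SRC = "train.json"        # hypothetical input path
DST = "train_clean.json"  # hypothetical output path

with open(SRC, "r") as f:
    coco = json.load(f)

# Keep only annotations whose first segmentation polygon has more than
# 4 coordinate values (at least three points); shorter entries are the
# ones pycocotools mis-parses as bounding boxes.
coco["annotations"] = [
    ann for ann in coco["annotations"]
    if len(ann["segmentation"][0]) > 4
]

with open(DST, "w") as f:
    json.dump(coco, f)

Re-annotating, as suggested above, remains the better fix, since dropping annotations discards ground truth.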

fanweiya commented 3 years ago

@yghstill Got it, thanks for pointing this out. I had never noticed it.