facebookresearch / maskrcnn-benchmark

Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
MIT License

No ground-truth boxes available for one of the images during training on Objects365 dataset. #805

Open miaoshuyu opened 5 years ago

miaoshuyu commented 5 years ago

```
Traceback (most recent call last):
  File "tools/train_net.py", line 186, in <module>
    main()
  File "tools/train_net.py", line 179, in main
    model = train(cfg, args.local_rank, args.distributed)
  File "tools/train_net.py", line 85, in train
    arguments,
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/engine/trainer.py", line 68, in do_train
    loss_dict = model(images, targets)
  File "/home/msy/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/msy/anaconda3/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/_initialize.py", line 194, in new_fwd
    applier(kwargs, input_caster))
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/detector/generalized_rcnn.py", line 50, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/home/msy/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/rpn/rpn.py", line 159, in forward
    return self._forward_train(anchors, objectness, rpn_box_regression, targets)
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/rpn/rpn.py", line 178, in _forward_train
    anchors, objectness, rpn_box_regression, targets
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/rpn/loss.py", line 105, in __call__
    labels, regression_targets = self.prepare_targets(anchors, targets)
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/rpn/loss.py", line 61, in prepare_targets
    anchors_per_image, targets_per_image, self.copied_fields
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/rpn/loss.py", line 44, in match_targets_to_anchors
    matched_idxs = self.proposal_matcher(match_quality_matrix)
  File "/home/msy/project/Maskrcnn_benchmark/maskrcnn-benchmark/maskrcnn_benchmark/modeling/matcher.py", line 57, in __call__
    "No ground-truth boxes available for one of the images "
ValueError: No ground-truth boxes available for one of the images during training
```
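For context on what the traceback is complaining about: the RPN matcher scores each anchor against every ground-truth box via an IoU matrix of shape `(num_gt, num_anchors)`; when an image has zero ground-truth boxes, that matrix is empty and there is nothing to match. A simplified sketch of that behavior (not the actual `matcher.py` code, just an illustration):

```python
import torch

def match_proposals(match_quality_matrix):
    """Assign each anchor its best-overlapping ground-truth box.

    match_quality_matrix: (num_gt, num_anchors) IoU matrix. With zero
    ground-truth boxes the matrix is empty, so the matcher raises --
    this is the ValueError seen in the traceback above.
    """
    if match_quality_matrix.numel() == 0:
        raise ValueError(
            "No ground-truth boxes available for one of the images "
            "during training")
    # max over the gt dimension: for each anchor, the index of the
    # gt box it overlaps most.
    matched_vals, matched_idxs = match_quality_matrix.max(dim=0)
    return matched_idxs

# An image with 2 gt boxes and 5 anchors matches fine:
print(match_proposals(torch.rand(2, 5)).shape)  # torch.Size([5])

# An image with 0 gt boxes triggers the error:
try:
    match_proposals(torch.zeros(0, 5))
except ValueError as e:
    print("raised:", e)
```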

I got this error when training on the Objects365 dataset. How can I solve it? Thanks a lot.

changqinyao commented 5 years ago

I also ran into this problem on Objects365. Were you able to solve it?

miaoshuyu commented 5 years ago

@changqinyao I deleted the line `anno = [obj for obj in anno if obj["iscrowd"] == 0]` in coco.py, and training works now.
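For anyone wondering why this works: the deleted line drops every annotation marked `iscrowd=1`. If all of an image's annotations are crowd regions, the filter leaves an empty list, so that image ends up with zero ground-truth boxes and the matcher raises. A minimal illustration with made-up annotations (not real Objects365 data):

```python
# An image whose annotations are all crowd regions:
anno = [
    {"bbox": [0, 0, 10, 10], "iscrowd": 1},
    {"bbox": [5, 5, 20, 20], "iscrowd": 1},
]

# The filter from coco.py strips every annotation on this image,
# leaving zero ground-truth boxes -- exactly the condition that
# makes the matcher raise during training.
filtered = [obj for obj in anno if obj["iscrowd"] == 0]
print(len(filtered))  # 0
```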

changqinyao commented 5 years ago

> @changqinyao I deleted the line `anno = [obj for obj in anno if obj["iscrowd"] == 0]` in coco.py, and training works now.

Why is that?

changqinyao commented 5 years ago

@miaoshuyu So it works because, in COCO, every image still has boxes after the `iscrowd=1` annotations are filtered out, whereas in Objects365 some images have only crowd annotations?
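One way to test this hypothesis is to scan the dataset's COCO-format annotation dict for images whose annotations are all crowds. A sketch (the `crowd_only_images` helper and the sample dict are mine, not part of the repo):

```python
from collections import defaultdict

def crowd_only_images(coco_dict):
    """Return ids of images whose annotations are all iscrowd=1.

    These images lose every box once the crowd filter in coco.py
    runs, which is exactly when the matcher raises the ValueError
    shown above.
    """
    annos_per_image = defaultdict(list)
    for ann in coco_dict["annotations"]:
        annos_per_image[ann["image_id"]].append(ann)
    return sorted(
        img_id for img_id, anns in annos_per_image.items()
        if all(a.get("iscrowd", 0) == 1 for a in anns)
    )

# Tiny made-up annotation dict: image 1 has a normal box, image 2 only crowds.
sample = {
    "annotations": [
        {"image_id": 1, "iscrowd": 0},
        {"image_id": 1, "iscrowd": 1},
        {"image_id": 2, "iscrowd": 1},
    ]
}
print(crowd_only_images(sample))  # [2]
```

If this returns a non-empty list for Objects365 but an empty one for COCO, that would confirm the explanation.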

miaoshuyu commented 5 years ago

@changqinyao Maybe. I think so.