I am trying to train Mask R-CNN on my own dataset. I have already set up the environment and successfully run the official VOC code, but I cannot get training to run on my own data. I originally converted the annotations from txt format into a VOC-style XML format in Roboflow.
After comparing against the official VOC dataset, I noticed my files contain an additional 0, and I don't know whether that is the problem.
Here is an example of the XML format in my dataset:
`
This is the error:
/home/asus/miniconda3/envs/rcnn/bin/python /home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train.py
Namespace(amp=False, aspect_ratio_group_factor=3, batch_size=2, data_path='', device='cuda:1', epochs=26, lr=0.004, lr_gamma=0.1, lr_steps=[16, 22], momentum=0.9, num_classes=6, output_dir='./save_weights', pretrain=True, resume='', start_epoch=0, weight_decay=0.0001)
Using cuda device training.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000256_jpg.rf.ab91e2b57ecad4b64d9a84b0b5700b48.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_001391_jpg.rf.96d5321ab049c025d7fa8f65d7d30684.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000170_jpg.rf.190d0cf1c18f88b7cc0e37f616fb25e2.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000256_jpg.rf.e173a96af0d22482bbba8089cca73a4f.xml, skip this annotation file.
Using [0, 0.5, 0.6299605249474366, 0.7937005259840997, 1.0, 1.2599210498948732, 1.5874010519681994, 2.0, inf] as bins for aspect ratio quantization
Count of instances per bin: [4150]
Using 2 dataloader workers
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_001134_jpg.rf.51e0a6c595b78bce2d912dfd6ceaf882.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000256_jpg.rf.4911be1b87dec91644f29aa503d5d777.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_001134_jpg.rf.404195a7aa8cf62f9742f7e4c0da359c.xml, skip this annotation file.
INFO: no objects in VOCdevkit/VOC2012/Annotations/China_Drone_000218_jpg.rf.112dd0036ba94a1cc2d52ff0d6e22ed5.xml, skip this annotation file.
_IncompatibleKeys(missing_keys=[], unexpected_keys=['fc.weight', 'fc.bias'])
_IncompatibleKeys(missing_keys=['roi_heads.box_predictor.cls_score.weight', 'roi_heads.box_predictor.cls_score.bias', 'roi_heads.box_predictor.bbox_pred.weight', 'roi_heads.box_predictor.bbox_pred.bias', 'roi_heads.mask_predictor.mask_fcn_logits.weight', 'roi_heads.mask_predictor.mask_fcn_logits.bias'], unexpected_keys=[])
/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train.py", line 240, in <module>
    main(args)
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train.py", line 139, in main
    mean_loss, lr = utils.train_one_epoch(model, optimizer, train_data_loader,
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/train_utils/train_eval_utils.py", line 32, in train_one_epoch
    loss_dict = model(images, targets)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/network_files/faster_rcnn_framework.py", line 94, in forward
    detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/asus/miniconda3/envs/rcnn/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/network_files/roi_head.py", line 548, in forward
    gt_masks = [t["masks"] for t in targets]
  File "/home/asus/Desktop/wjl-project/Mask_RCNN/mask_rcnn/network_files/roi_head.py", line 548, in <listcomp>
    gt_masks = [t["masks"] for t in targets]
KeyError: 'masks'
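From the traceback, the ROI head expects every training target to contain a "masks" tensor, which my converted VOC-style XML (bounding boxes only) presumably never provides. For reference, here is a small self-contained sketch, using torchvision's maskrcnn_resnet50_fpn rather than this repo's model, purely to illustrate the target format a torchvision-style Mask R-CNN needs during training; the image size, box, and class count are arbitrary.

```python
import torch
import torchvision

# Illustration only: a torchvision-style Mask R-CNN needs "boxes", "labels"
# AND "masks" in every training target; leaving out "masks" reproduces the
# same KeyError as in my traceback.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=6  # no weight download
)
model.train()

image = torch.rand(3, 256, 256)                       # dummy image in [0, 1]
mask = torch.zeros((1, 256, 256), dtype=torch.uint8)  # one binary mask per box
mask[0, 10:120, 10:100] = 1

target = {
    "boxes": torch.tensor([[10.0, 10.0, 100.0, 120.0]]),  # [N, 4], xyxy
    "labels": torch.tensor([1]),                           # [N], int64 class ids
    "masks": mask,                                          # [N, H, W], uint8
}

loss_dict = model([image], [target])   # runs because "masks" is present
print(sorted(loss_dict.keys()))

# del target["masks"]; model([image], [target])  # -> KeyError: 'masks'
```

So my suspicion is that the annotations I feed in never produce a "masks" entry, but I am not sure whether that comes from the extra 0 in the XML or from the conversion itself.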