akhilpm / DroneDetectron2

PyTorch code for our CVPRW 2023 paper "Cascaded Zoom-in Detector for High Resolution Aerial Images"
MIT License

Hello, every time I run the code I get a broken pipe error. What is the reason? #13

Closed zuikeaideren closed 1 year ago

zuikeaideren commented 1 year ago

```
[07/19 10:41:47] fvcore.common.checkpoint INFO: Saving checkpoint to outputs_FPN_VisDrone/model_0009999.pth
[07/19 10:41:49] d2.data.common INFO: Serializing the dataset using: <class 'detectron2.data.common._TorchSerializedList'>
[07/19 10:41:49] d2.data.common INFO: Serializing 548 elements to byte tensors and concatenating them all ...
[07/19 10:41:49] d2.data.common INFO: Serialized dataset takes 1.44 MiB
[07/19 10:41:49] d2.data.dataset_mapper INFO: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(1200, 1200), max_size=1999, sample_style='choice')]
[07/19 10:41:49] d2.evaluation.evaluator INFO: Start inference on 548 batches
[07/19 10:41:51] d2.evaluation.evaluator INFO: Inference done 11/548. Dataloading: 0.0517 s/iter. Inference: 0.0492 s/iter. Eval: 0.0005 s/iter. Total: 0.1013 s/iter. ETA=0:00:54
[07/19 10:41:56] d2.evaluation.evaluator INFO: Inference done 71/548. Dataloading: 0.0457 s/iter. Inference: 0.0400 s/iter. Eval: 0.0004 s/iter. Total: 0.0861 s/iter. ETA=0:00:41
[07/19 10:42:01] d2.evaluation.evaluator INFO: Inference done 130/548. Dataloading: 0.0457 s/iter. Inference: 0.0394 s/iter. Eval: 0.0004 s/iter. Total: 0.0856 s/iter. ETA=0:00:35
[07/19 10:42:06] d2.evaluation.evaluator INFO: Inference done 191/548. Dataloading: 0.0452 s/iter. Inference: 0.0391 s/iter. Eval: 0.0004 s/iter. Total: 0.0848 s/iter. ETA=0:00:30
[07/19 10:42:11] d2.evaluation.evaluator INFO: Inference done 245/548. Dataloading: 0.0468 s/iter. Inference: 0.0395 s/iter. Eval: 0.0004 s/iter. Total: 0.0868 s/iter. ETA=0:00:26
[07/19 10:42:16] d2.evaluation.evaluator INFO: Inference done 297/548. Dataloading: 0.0481 s/iter. Inference: 0.0402 s/iter. Eval: 0.0004 s/iter. Total: 0.0888 s/iter. ETA=0:00:22
[07/19 10:42:21] d2.evaluation.evaluator INFO: Inference done 348/548. Dataloading: 0.0487 s/iter. Inference: 0.0407 s/iter. Eval: 0.0007 s/iter. Total: 0.0902 s/iter. ETA=0:00:18
[07/19 10:42:26] d2.evaluation.evaluator INFO: Inference done 399/548. Dataloading: 0.0492 s/iter. Inference: 0.0413 s/iter. Eval: 0.0007 s/iter. Total: 0.0912 s/iter. ETA=0:00:13
[07/19 10:42:31] d2.evaluation.evaluator INFO: Inference done 452/548. Dataloading: 0.0495 s/iter. Inference: 0.0416 s/iter. Eval: 0.0007 s/iter. Total: 0.0918 s/iter. ETA=0:00:08
[07/19 10:42:36] d2.evaluation.evaluator INFO: Inference done 503/548. Dataloading: 0.0498 s/iter. Inference: 0.0417 s/iter. Eval: 0.0010 s/iter. Total: 0.0925 s/iter. ETA=0:00:04
[07/19 10:42:40] d2.evaluation.evaluator INFO: Total inference time: 0:00:50.314425 (0.092660 s / iter per device, on 1 devices)
[07/19 10:42:40] d2.evaluation.evaluator INFO: Total inference pure compute time: 0:00:22 (0.041902 s / iter per device, on 1 devices)
[07/19 10:42:41] d2.evaluation.coco_evaluation INFO: Preparing results for COCO format ...
[07/19 10:42:41] d2.evaluation.coco_evaluation INFO: Saving results to outputs_FPN_VisDrone/inference/coco_instances_results.json
[07/19 10:42:42] d2.evaluation.coco_evaluation INFO: Evaluating predictions with unofficial COCO API...
[07/19 10:42:42] d2.engine.train_loop ERROR: Exception during training:
Traceback (most recent call last):
  File "/root/detectron2/detectron2/engine/train_loop.py", line 156, in train
    self.after_step()
  File "/root/detectron2/detectron2/engine/train_loop.py", line 190, in after_step
    h.after_step()
  File "/root/detectron2/detectron2/engine/hooks.py", line 556, in after_step
    self._do_eval()
  File "/root/detectron2/detectron2/engine/hooks.py", line 529, in _do_eval
    results = self._func()
  File "/root/data/yhj/detectron2/croptrain/engine/trainer.py", line 238, in test_and_save_results
    self._last_eval_results = self.test(self.cfg, self.model)
  File "/root/detectron2/detectron2/engine/defaults.py", line 617, in test
    results_i = inference_on_dataset(model, data_loader, evaluator)
  File "/root/detectron2/detectron2/evaluation/evaluator.py", line 204, in inference_on_dataset
    results = evaluator.evaluate()
  File "/root/detectron2/detectron2/evaluation/coco_evaluation.py", line 206, in evaluate
    self._eval_predictions(predictions, img_ids=img_ids)
  File "/root/detectron2/detectron2/evaluation/coco_evaluation.py", line 266, in _eval_predictions
    _evaluate_predictions_on_coco(
  File "/root/detectron2/detectron2/evaluation/coco_evaluation.py", line 590, in _evaluate_predictions_on_coco
    coco_dt = coco_gt.loadRes(coco_results)
  File "/root/data/anaconda3/envs/Yan_detectron2/lib/python3.8/site-packages/pycocotools/coco.py", line 316, in loadRes
    print('Loading and preparing results...')
BrokenPipeError: [Errno 32] Broken pipe
```

akhilpm commented 1 year ago

Well, this is probably because the number of dataloader workers is not set appropriately for your environment. My code uses 2 workers by default; you may need to change that default to suit your setup.

https://github.com/pytorch/pytorch/issues/2341 Issues like this are not related to the code; you have to learn to use PyTorch properly.
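
For reference, here is a minimal sketch of how one might lower the worker count through the config before training. `DATALOADER.NUM_WORKERS` is a standard Detectron2 config key, but the config file path and trainer entry point below are only illustrative and may differ from the ones used in this repository:

```python
# Sketch: reduce the number of dataloader workers before building the trainer.
# Setting NUM_WORKERS to 0 keeps data loading in the main process, which often
# avoids BrokenPipeError on machines with limited shared memory or few CPU cores.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/Base-RCNN-FPN.yaml")  # hypothetical config path
cfg.DATALOADER.NUM_WORKERS = 0  # or a small value that suits your machine
```

Equivalently, most Detectron2 training scripts accept config overrides on the command line, so appending `DATALOADER.NUM_WORKERS 0` to the launch command should have the same effect, though the exact script name for this repo may differ.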