I trained the model on my own dataset, but when I use the demo to run inference on the test data, I get an out-of-memory error. After I replace the test data with training data, the error remains. My GPU is a 2080 Ti with 11 GB of memory.
I have tried tuning the INPUT SIZE, but it doesn't seem to help.
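Concretely, this is roughly how I change the input size before running the demo (a minimal sketch; I'm assuming the RRPN config keeps maskrcnn-benchmark's standard INPUT.MIN_SIZE_TEST / INPUT.MAX_SIZE_TEST keys, and 600/1000 are just example values):

```python
# Minimal sketch of shrinking the test-time input size before building the
# predictor in demo/RRPN_Demo.py. Assumption: the RRPN config follows
# maskrcnn-benchmark's standard INPUT keys; 600/1000 are example values only.
from maskrcnn_benchmark.config import cfg

cfg.merge_from_file("./configs/rrpn/e2e_rrpn_R_50_C4_1x_SHIP_test.yaml")
cfg.merge_from_list([
    "INPUT.MIN_SIZE_TEST", 600,   # shorter image side at inference
    "INPUT.MAX_SIZE_TEST", 1000,  # cap on the longer side
])
cfg.freeze()
print(cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MAX_SIZE_TEST)
```

Even after reducing these values, I hit the same crash shown below.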
The error message is as below:
***** META INFO *******
config_file: ./configs/rrpn/e2e_rrpn_R_50_C4_1x_SHIP_test.yaml
result_dir: results/e2e_rrpn_R_50_C4_1x_SHIP_test/model_0150000
image_dir: /home/xxx/data/ship/test/img
weights: ./models/SHIP/model_0150000.pth
---
image: /home/xxx/data/ship/test/img/P0440.png
torch.Size([3, 1000, 1133])
<maskrcnn_benchmark.structures.image_list.ImageList object at 0x7fab5ea6e0f0>
cuda:2
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1544176307774/work/aten/src/THC/THCGeneral.cpp line=405 error=11 : invalid argument
out of memory
out of memory
out of memory
out of memory
out of memory
out of memory
Traceback (most recent call last):
File "demo/RRPN_Demo.py", line 81, in <module>
predictions, bounding_boxes = coco_demo.run_on_opencv_image(img)
File "/home/xxx/project/github/RRPN_pytorch/demo/predictor.py", line 710, in run_on_opencv_image
predictions = self.compute_prediction(image)
File "/home/xxx/project/github/RRPN_pytorch/demo/predictor.py", line 744, in compute_prediction
predictions = self.model(image_list)
File "/home/xxx/.conda/envs/rrpn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(input, **kwargs)
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/modeling/detector/generalized_rrpn_rcnn.py", line 61, in forward
x, result, detector_losses = self.roi_heads(features, proposals, targets)
File "/home/xxx/.conda/envs/rrpn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(input, kwargs)
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/rroi_heads.py", line 27, in forward
x, detections, loss_box = self.box(features, proposals, targets)
File "/home/xxx/.conda/envs/rrpn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, kwargs)
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/rbox_head/box_head.py", line 55, in forward
x = self.feature_extractor(features, recur_proposals)
File "/home/xxx/.conda/envs/rrpn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(input, **kwargs)
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/rbox_head/roi_box_feature_extractors.py", line 42, in forward
x = self.pooler(x, proposals)
File "/home/xxx/.conda/envs/rrpn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(input, kwargs)
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/modeling/poolers.py", line 103, in forward
return self.poolers[0](x[0], rois)
File "/home/xxx/.conda/envs/rrpn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/layers/rroi_align.py", line 71, in forward
input, rois_reverse, self.output_size, self.spatial_scale
File "/home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/layers/rroi_align.py", line 19, in forward
input, roi, spatial_scale, output_size[0], output_size[1]
RuntimeError: cuda runtime error (2) : out of memory at /home/xxx/project/github/maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/RROIAlign_cuda.cu:236
My config is as below: