canerozer opened this issue 6 years ago
@dontgetdown I'm running into this problem too. Have you found a solution for it?
Not yet, other than restarting the TensorFlow session after the warnings start appearing at around 1400 iterations.
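For reference, the restart workaround looks roughly like this (a minimal sketch; `InferenceConfig`, `MODEL_DIR`, and `WEIGHTS_PATH` are placeholders for my actual setup, and the repo-root `model.py` layout is assumed):

```python
import keras.backend as K
import model as modellib  # model.py at the repo root

# Tear down the TF session once the warnings appear, then rebuild the
# model and reload the weights before resuming the evaluation loop.
K.clear_session()
model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir=MODEL_DIR)
model.load_weights(WEIGHTS_PATH, by_name=True)
```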
Today I checked the relevant code again, and I will soon try a different implementation. I will also try running the code on different machines, possibly tomorrow.
@dontgetdown Thank you
I did some research into the root of the problem.
The out-of-memory issue occurs when I run the graph with this snippet:
```python
results = model.run_graph(..., [
    ("refined_anchors_clipped",
     model.ancestor(pillar, "ROI/refined_anchors_clipped:0")),
])
```
Since this problem does not occur during training, I suspect the issue is caused by the model.ancestor function.
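If that is the cause, one mitigation I plan to try (a sketch only; the exact input list depends on the repo version, and K.learning_phase() may need to be appended to it) is to build the fetch function once with K.function and reuse it across images, so repeated run_graph calls don't keep adding ops to the graph:

```python
import keras.backend as K

# Build the fetch function once, outside the evaluation loop, so no new
# ops are created per iteration.
pillar = model.keras_model.get_layer("ROI").output
fetch = K.function(model.keras_model.inputs,
                   [model.ancestor(pillar, "ROI/refined_anchors_clipped:0")])

for image in images:  # `images` stands in for my dataset iterator
    molded_images, image_metas, windows = model.mold_inputs([image])
    refined_anchors_clipped = fetch([molded_images, image_metas])[0]
```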
One more thing: when I set the batch size to 4 for inference, I cannot get the output for all 4 images in the batch from the nodes inside ProposalLayer.
```
rpn_class                shape: (4, 261888, 2)  min: 0.00000   max: 1.00000   float32
rpn_bbox                 shape: (4, 261888, 4)  min: -7.51282  max: 25.38355  float32
refined_anchors_clipped  shape: (1, 6000, 4)    min: 0.00000   max: 1.00000   float32
```
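My current guess (unverified) is that ProposalLayer processes the batch one image at a time, in the spirit of utils.batch_slice, so a named intermediate such as refined_anchors_clipped corresponds to a single slice rather than the full batch:

```python
import tensorflow as tf

# Simplified restatement of the slicing idea (not the repo's exact code,
# and graph_fn is assumed to return a single tensor): the batch is
# unstacked, the per-image graph runs on each slice, and only the final
# results are stacked back. Intermediate tensors exist per slice only.
def batch_slice(inputs, graph_fn, batch_size):
    outputs = [graph_fn(*[x[i] for x in inputs]) for i in range(batch_size)]
    return tf.stack(outputs, axis=0)
```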
TensorFlow version: 1.5.0
Keras version: 2.1.2
Python version: 3.5.2
GPU: NVIDIA GTX 1080
Hello,
I am trying to extract the region-of-interest output on various datasets with the given model file, but after around a thousand iterations I receive errors. I don't think the issue is caused by the limits of my GPU, since this is a forward-pass-only task. I have to restart the evaluation afterwards, and then the same thing happens again. There are no problems with obtaining the final detections on the same datasets.
This is the gist of the draft code that I wrote:
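(Sketch only; `InferenceConfig`, `MODEL_DIR`, `WEIGHTS_PATH`, and `IMAGE_DIR` are placeholders for my actual setup.)

```python
import os
import skimage.io
import model as modellib  # model.py at the repo root

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir=MODEL_DIR)
model.load_weights(WEIGHTS_PATH, by_name=True)

# Pull the clipped, refined anchors out of the ProposalLayer for each image.
pillar = model.keras_model.get_layer("ROI").output
for fname in sorted(os.listdir(IMAGE_DIR)):
    image = skimage.io.imread(os.path.join(IMAGE_DIR, fname))
    results = model.run_graph([image], [
        ("refined_anchors_clipped",
         model.ancestor(pillar, "ROI/refined_anchors_clipped:0")),
    ])
    # ... save results["refined_anchors_clipped"] for this image ...
```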
At first, the program starts showing these warnings:
Then the warnings change to:
And finally, I receive this error:
In essence, my goal is to evaluate multiple series of images rather than a single image. To work around the problem, I have tried setting the allocator type to BFC, as suggested elsewhere, and I evaluate with a batch size of 1. However, I suspect there might be an issue with garbage collection. Does anyone have a suggestion for solving this?
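For reference, this is roughly how I set the allocator (standard TF 1.x session configuration):

```python
import tensorflow as tf
import keras.backend as K

# Force the BFC allocator and let GPU memory grow on demand,
# then hand the configured session to Keras.
config = tf.ConfigProto()
config.gpu_options.allocator_type = "BFC"
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```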
Best regards,