Leonhard1987 closed this issue 6 years ago
TextBoxes++ is a fully convolutional network thus it can handle different input sizes.
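To illustrate why a fully convolutional network does not require a fixed input size: a convolution's output spatial size simply follows the input size, unlike a fully connected layer whose weight matrix fixes the input dimension. A toy sketch (not TextBoxes++ code, just a naive 'valid' convolution):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution; the output size follows the input size."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

kernel = np.ones((3, 3)) / 9.0  # 3x3 box filter, just for demonstration
for h, w in [(32, 32), (64, 64)]:
    out = conv2d_valid(np.random.rand(h, w), kernel)
    print((h, w), '->', out.shape)  # (32, 32) -> (30, 30), (64, 64) -> (62, 62)
```

The same kernel weights apply to any input resolution, which is why only memory, not architecture, limits the input size.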
Thank you for your fast response!
After running detection with 'python examples/text/demo.py', I receive the following error if the resolution is set too high (for instance 1280x720):
F0703 15:27:46.782757 2223 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0) out of memory Check failure stack trace: Aborted (core dumped)
Do you know what might be the problem or how to fix it?
Your GPU memory is not enough. You can use nvidia-smi to check your GPU memory.
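As a rough illustration of why a larger input runs out of memory: activation memory in a conv net grows with the number of input pixels. A back-of-the-envelope sketch for a VGG-style backbone (the channel counts and pooling schedule here are assumptions for illustration, not the exact TextBoxes++ layer list):

```python
def activation_bytes(h, w, channels=(64, 128, 256, 512, 512), dtype_bytes=4):
    """Hypothetical activation-memory estimate: one float32 feature map per
    stage, with 2x2 pooling between stages (assumed, for illustration)."""
    total = 0
    for c in channels:
        total += h * w * c * dtype_bytes
        h, w = h // 2, w // 2
    return total

for h, w in [(384, 384), (768, 768), (1280, 720)]:
    print(f"{h}x{w}: ~{activation_bytes(h, w) / 1e9:.2f} GB")
```

Doubling both input dimensions roughly quadruples activation memory, so a setting that fits at 384x384 can easily exceed the card at 768x768 or 1280x720.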
Yes, you are right. After checking nvidia-smi I realized the memory is already at its limit.
However, only 1 out of 4 GPUs is used. Is there a way to use all 4 GPUs, so that higher resolutions can be set?
I changed the resolution settings in the examples/text/demo.py file according to my image file. The resolution was set to 768 x 768: 'input_height': 768, 'input_width': 768. The new setting is slightly larger (matching the picture size).
Now when I start demo.py, the detection fails.
Is it possible to adjust input_height and input_width? Is TextBoxes++ capable of handling different resolutions? Looking at the Caffe model, I saw an fc layer. I assume this layer is fully connected, and that the size of the input image therefore has to be fixed. Is that correct?
What kind of method is used to resize the original image?
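Regarding the resize: demo.py presumably warps the whole image to (input_height, input_width) during preprocessing, distorting the aspect ratio if the target is square; I am not certain which interpolation it uses (bilinear is typical). A minimal, dependency-free sketch using nearest-neighbour indexing (a stand-in for whatever interpolation the demo actually applies):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize: maps each output pixel back to the closest
    source pixel. The image is warped to the target shape regardless of
    its original aspect ratio."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows][:, cols]

img = np.arange(1280 * 720).reshape(720, 1280)
print(resize_nearest(img, 768, 768).shape)  # (768, 768)
```

This is why the detector still runs on a 1280x720 photo with 'input_height'/'input_width' set to 768: the image is squeezed to the network's input shape first, and the predicted boxes are scaled back afterwards.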