First of all, many thanks for this great implementation of the RetinaNet detector.
I have the following problem: I use the train.py script for training. Training runs without problems, but evaluation takes forever, sometimes several hours. GPU load during evaluation drops to zero, and there is no increased CPU usage either.
The part where the command line shows "Parsing Annotations" runs fast and the GPU is fully utilized, but after the annotations are read, GPU load drops to zero and the waiting starts. I see the same behaviour when I use bin\evaluate.py.
My validation set contains 3000 images, some with hundreds of objects, so I don't know whether this behaviour is normal.
I have set the maximum number of detections in filter_detections to 500.
I use TensorFlow 2.1.0, Keras 2.3.1, and CUDA 10.1. My GPU is a Quadro RTX 4000.
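For reference, the matching step of mAP evaluation computes pairwise IoU between detections and ground-truth boxes on the CPU, which scales with detections × annotations per image. The following is a minimal, self-contained sketch (not the project's actual evaluation code; the box counts of 500 detections and 300 annotations are assumptions based on the numbers above) to estimate what that step alone costs per image:

```python
import time
import numpy as np

def compute_overlap(boxes, query_boxes):
    """Pairwise IoU between (N, 4) and (K, 4) boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle coordinates via broadcasting -> shape (N, K).
    x1 = np.maximum(boxes[:, None, 0], query_boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], query_boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], query_boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], query_boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_q = (query_boxes[:, 2] - query_boxes[:, 0]) * (query_boxes[:, 3] - query_boxes[:, 1])
    union = area_b[:, None] + area_q[None, :] - inter
    return inter / np.maximum(union, 1e-9)

rng = np.random.default_rng(0)

def random_boxes(n):
    # Hypothetical boxes just for timing purposes.
    xy = rng.uniform(0, 800, size=(n, 2))
    wh = rng.uniform(10, 100, size=(n, 2))
    return np.hstack([xy, xy + wh])

# Time a sample of images with 500 detections vs. 300 annotations each.
n_sample = 100
start = time.perf_counter()
for _ in range(n_sample):
    iou = compute_overlap(random_boxes(500), random_boxes(300))
elapsed = time.perf_counter() - start
print(f"overlap step: {elapsed / n_sample * 1000:.2f} ms/image")
```

If this per-image cost extrapolated over 3000 images is far smaller than the hours observed, the bottleneck is more likely single-image inference or per-detection Python bookkeeping in the evaluation loop rather than the IoU computation itself.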
This issue has been automatically marked as stale due to the lack of recent activity. It will be closed if no further activity occurs. Thank you for your contributions.