[Closed] ANaaim closed this issue 9 months ago
Hi,
I have personally never batch processed with marker augmentation, so I have not experienced this issue.
It seems quite similar to what's tackled in this post: https://stackoverflow.com/questions/51228131/model-inference-running-time-increases-after-repeated-inferences
The answer being:
You're still seeing growth in running time because you're still calling model more than once in a sess. You just reduced the frequency with which you added nodes to the graph. What you need to do is create a new session for each model you want to build, and close each session when you're done with it.
Would you have time to try editing markerAugmentation.py and see if it makes a difference?
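For reference, a rough sketch of what that edit could look like is below. The function, loop, and variable names are hypothetical (not the actual contents of markerAugmentation.py); the point is just to clear the Keras backend session after each trial so the graph stops growing:

```python
# Hypothetical sketch, not the actual markerAugmentation.py code:
# clear the Keras backend session after each trial so repeated model
# loading/inference does not keep adding nodes to the TensorFlow graph.
from keras import backend as K
from keras.models import load_model

def augment_all_trials(model_path, trials):
    results = []
    for trial_data in trials:
        model = load_model(model_path)         # fresh model for this trial
        results.append(model.predict(trial_data))
        K.clear_session()                      # drop the accumulated graph/session
    return results
```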
Awesome, that was fast! Instead of importing keras, I'm thinking of doing "tf.keras.backend.clear_session()". Thanks for finding the solution!
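That variant would look roughly like this (again only a sketch, placed wherever one augmentation call finishes, with no separate keras import needed):

```python
import tensorflow as tf

# After each marker-augmentation call:
tf.keras.backend.clear_session()  # reset the backend so the graph does not grow across trials
```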
Hi,
I have started using the marker augmentation.
When running a large batch, marker augmentation initially takes around 10 to 15 seconds per trial, but the time keeps increasing as the batch progresses. It might be a memory issue on the GPU, because everything goes back to normal when I stop the Python process.
Has this ever happened to anyone?
Best regards,