ZFTurbo / Keras-RetinaNet-for-Open-Images-Challenge-2018

Code for 15th place in Kaggle Google AI Open Images - Object Detection Track
MIT License
268 stars 76 forks

My predict method is taking too much time even though I am using a GPU with 8 GB of RAM. Why is it so slow? #20

Open DeveloperRachit opened 4 years ago

DeveloperRachit commented 4 years ago

My predict method is taking too much time even though I am using a GPU with 8 GB of RAM. Why is it taking so much time?

I am using the pretrained model retinanet_resnet152_500_classes_0.4991.h5.

ZFTurbo commented 4 years ago

Can you post the exact time?

DeveloperRachit commented 4 years ago

Yes, it's taking 120 seconds to predict the objects in an image.

ZFTurbo commented 4 years ago

1) You need to ensure the GPU is actually used.
2) A long time is possible for the first recognition, because of long model initialization. Try to recognize several images and check how much time each one requires.
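The warm-up effect described in point 2 can be checked with a short, model-agnostic timing loop (a sketch; `predict_fn` is a stand-in for something like `lambda img: model.predict(np.expand_dims(img, axis=0))`):

```python
import time

def time_calls(predict_fn, inputs):
    """Time each prediction separately. The first call typically includes
    one-off costs (graph construction, CUDA context creation, copying
    weights to the GPU), so later calls show the real per-image time."""
    timings = []
    for x in inputs:
        start = time.time()
        predict_fn(x)
        timings.append(time.time() - start)
    return timings
```

If only the first timing is large and the rest are fast, the slowdown is initialization, not inference.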

DeveloperRachit commented 4 years ago

I am sure it's using the GPU; GPU memory is full. If it's using the GPU, why is it still taking so much time?

DeveloperRachit commented 4 years ago

It's taking 120 seconds for every image; I tried many images.

ZFTurbo commented 4 years ago

Did you load the model before each image, or reuse the same one?

DeveloperRachit commented 4 years ago

If you want, I can also show you my code.

DeveloperRachit commented 4 years ago

This is how I load the model:

```python
model_path = "/data/sample-apps/deep_dive_demos/open_images_detection/preprocessing/retinanet_resnet152_lt.h5"
model = models.load_model(model_path, backbone_name='resnet152')
```

DeveloperRachit commented 4 years ago

Loading the model takes too much time, and then

```python
boxes, scores, labels = model.predict(np.expand_dims(image, axis=0))
```

takes 60 seconds.

So the whole thing takes 120 seconds.

DeveloperRachit commented 4 years ago

Yes, I am loading it before each image. I am actually processing single images, not multiple.

DeveloperRachit commented 4 years ago

I made a simple user interface where the user uploads an image and clicks "detect". When they click detect, it first loads the model and then runs prediction, and it does the same for every uploaded image.

DeveloperRachit commented 4 years ago

My API loads the model after the image is uploaded and then predicts the objects.

ZFTurbo commented 4 years ago

That's strange. It shouldn't take more than a second. Which TensorFlow and Keras versions do you use?

Did you try resnet101 and resnet50?

DeveloperRachit commented 4 years ago

No, I only tried resnet152.

DeveloperRachit commented 4 years ago

I am using tensorflow-gpu==1.14.0 and Keras==2.3.1.

ZFTurbo commented 4 years ago

Looks fine. Do you use the model for inference?

Try a model based on resnet50 and check the timing.

DeveloperRachit commented 4 years ago

I tried resnet50, but it also takes 120 seconds.

DeveloperRachit commented 4 years ago

I am using the model for inference.

ZFTurbo commented 4 years ago

Sorry, I don't know what the cause could be. :(

Did you try the script with the inference example? https://github.com/ZFTurbo/Keras-RetinaNet-for-Open-Images-Challenge-2018/blob/master/retinanet_inference_example.py

DeveloperRachit commented 4 years ago

Yes, I tried it.

ZFTurbo commented 4 years ago

During those 60 seconds of inference, can you check the GPU usage?

DeveloperRachit commented 4 years ago

```
Thu Apr 23 15:11:45 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.93       Driver Version: 410.93       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P4000        Off  | 00000000:81:00.0 Off |                  N/A |
| 53%   55C    P0    35W / 105W |   7901MiB /  8119MiB |     11%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      7759      C   /usr/bin/python3                            5779MiB |
|    0      9620      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10424      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10428      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10430      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10436      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10438      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10440      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10441      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10442      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10444      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     10447      C   /usr/local/AV/bin/ffmpeg                     251MiB |
|    0     11329      C   /usr/local/AV/bin/ffmpeg                     137MiB |
|    0     14191      C   /usr/local/AV/bin/ffmpeg                     204MiB |
|    0     25039      C   /usr/local/AV/bin/ffmpeg                     139MiB |
+-----------------------------------------------------------------------------+
```

DeveloperRachit commented 4 years ago

It's taking 40 seconds to load the model and the remaining 25 seconds to predict.

DeveloperRachit commented 4 years ago

Is there any way to reduce the model loading time, or can we load the model only once?

ZFTurbo commented 4 years ago

I don't think it's possible to reduce the model load time, but you can load the model once and keep it in memory while processing images.
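A minimal sketch of that "load once" idea (assumptions: a single-process server; `load_fn` stands in for `models.load_model`):

```python
# Cache the loaded model keyed by path, so the ~40-second load cost
# is paid once at startup instead of on every request.
_model_cache = {}

def get_model(model_path, load_fn):
    """Load the model on the first call and reuse it afterwards."""
    if model_path not in _model_cache:
        _model_cache[model_path] = load_fn(model_path)
    return _model_cache[model_path]
```

Call `get_model()` from the request handler; only the first request pays the loading time.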

DeveloperRachit commented 4 years ago

I am using my API. When I load the model once, it gives me an error when I pass an image to the predict method:

```
ValueError: Tensor Tensor("filtered_detections/map/TensorArrayStack/TensorArrayGatherV3:0", shape=(?, 500, 4), dtype=float32) is not an element of this graph.
```

DeveloperRachit commented 4 years ago

Could you help me load the model once into memory with a TF session?
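With Keras on TF 1.x behind a multi-threaded web server, that `ValueError` usually means `predict()` ran on a different thread than `load_model()`, so TF's thread-local default graph is empty. A common workaround is to capture the graph right after loading and re-enter it on every prediction. A sketch (not from this repo; `keras_retinanet` assumed, as in the fizyr package this code is based on):

```python
import numpy as np

_model = None
_graph = None

def load_model_once(model_path, backbone_name='resnet152'):
    """Call once at server startup: load the model and remember its graph."""
    global _model, _graph
    if _model is None:
        import tensorflow as tf
        from keras_retinanet import models  # assumption: fizyr keras-retinanet
        _model = models.load_model(model_path, backbone_name=backbone_name)
        _graph = tf.get_default_graph()
    return _model

def predict_objects(image):
    """Run inference inside the graph captured at load time, so it works
    even when the web framework calls this from a different thread."""
    if _model is None:
        raise RuntimeError('call load_model_once() at startup first')
    with _graph.as_default():
        boxes, scores, labels = _model.predict(np.expand_dims(image, axis=0))
    return boxes, scores, labels
```

In the API, call `load_model_once()` during startup, and only `predict_objects()` inside the upload handler.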