nwojke / deep_sort

Simple Online Realtime Tracking with a Deep Association Metric
GNU General Public License v3.0

Same Inference Time on CPU and GPU #226

Closed Ali-Mokhtari closed 3 years ago

Ali-Mokhtari commented 3 years ago

Hello, I am trying to run DeepSort on two different AWS EC2 instances:

1- t2.xlarge: general-purpose CPU instance with 4 vCPUs
2- g3s.xlarge: GPU instance with one Tesla M60 GPU

I use the "time" command in Ubuntu to measure execution time, and I get almost the same result on both instances. I used the following command to run DeepSort on MOT16-06:

time python deep_sort_app.py --sequence_dir=/home/ubuntu/MOT16/test/MOT16-06/ --detection_file=./resources/detections/MOT16_POI_test/MOT16-06.npy --min_confidence=0.3 --nn_budget=100 --display=False

Why is DeepSort's performance similar on CPU and GPU? I thought it should perform much better on a GPU. For example, it takes 0.527 sec to process the first 30 frames of MOT16-06 (test) on the CPU instance and 0.582 sec for the same frames on the GPU instance. Thank you
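Is there a way to confirm whether the GPU is exercised at all during the run? I assume watching the utilization reported by nvidia-smi in a second terminal while the script executes would show this:

watch -n 1 nvidia-smi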

studentbrad commented 3 years ago

That script does not run on a GPU, and there is no network inference happening in it either. I think you are confused: the appearance features are computed ahead of time and are simply loaded from the .npy file you pass with --detection_file. What deep_sort_app.py executes at runtime is the Kalman filter and the association step, which is plain NumPy/SciPy work and runs on the CPU regardless of the instance type.
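You can confirm this by inspecting the detection file directly. Here is a minimal sketch (not part of the repo), assuming the layout written by tools/generate_detections.py: each row carries 10 MOT-style columns (frame index, track id, bounding box, confidence, placeholders) followed by the appearance feature vector, which is 128-dimensional for the model distributed with this repo.

import numpy as np

# Load the same file passed via --detection_file in the command above.
detections = np.load("./resources/detections/MOT16_POI_test/MOT16-06.npy")

print("number of detections:", detections.shape[0])
print("columns per row:", detections.shape[1])  # expected 10 + 128 = 138

# deep_sort_app.py slices each row the same way:
# bbox = row[2:6], confidence = row[6], feature = row[10:]
features = detections[:, 10:]
print("feature dimensionality:", features.shape[1])

If you want to see a difference between the two instances, time the feature-generation step (tools/generate_detections.py) instead: that is where the CNN actually runs and where a GPU can help.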