wjf5203 / VNext

Next-generation video instance recognition framework on top of Detectron2, which supports InstMove (CVPR 2023), SeqFormer (ECCV Oral), and IDOL (ECCV Oral)
Apache License 2.0

GPU util = 0 while running inference #26

Closed (shulou1996 closed 2 years ago)

shulou1996 commented 2 years ago

Hello, thank you for your great work!

There is a problem when I run your inference script on the YTVIS dataset:

`python3 projects/IDOL/train_net.py --config-file projects/IDOL/configs/XXX.yaml --num-gpus 8 --eval-only MODEL.WEIGHTS /path to my .pth`

Everything runs and GPU memory usage looks normal, but GPU utilization is always 0.

Total inference time is around 2 hours, and I am not sure whether all GPUs are actually being used during inference.

wjf5203 commented 2 years ago

Hi, thanks for your interest~ Typically, inference on YTVIS takes only a few minutes, so two hours is abnormal. Maybe you can check whether CPU utilization is very high during inference. If it is, set cfg.MODEL.IDOL.MERGE_ON_CPU=False in the config file, which avoids merging results on the CPU and speeds up inference.
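
For reference, a minimal sketch of the corresponding config edit. The key name comes from this thread; the surrounding YAML nesting assumes the standard Detectron2/yacs layout, where `cfg.MODEL.IDOL.MERGE_ON_CPU` maps onto nested YAML sections:

```yaml
# projects/IDOL/configs/XXX.yaml (illustrative excerpt)
MODEL:
  IDOL:
    # Default is True, which merges per-frame results on the CPU.
    # False keeps merging on the GPU and avoids the CPU bottleneck
    # (GPU util 0, high CPU load) described above.
    MERGE_ON_CPU: False
```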

shulou1996 commented 2 years ago

Yes, the default is True, and setting it to False significantly speeds up inference. Thanks.
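
As an alternative to editing the YAML, the same override can usually be passed on the command line: Detectron2-style train_net.py scripts generally accept trailing KEY VALUE pairs that modify the config, as the original command already does with MODEL.WEIGHTS. A sketch, assuming this script follows that convention:

```
python3 projects/IDOL/train_net.py --config-file projects/IDOL/configs/XXX.yaml \
  --num-gpus 8 --eval-only MODEL.WEIGHTS /path to my .pth \
  MODEL.IDOL.MERGE_ON_CPU False
# (replace "/path to my .pth" with the actual checkpoint path)
```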