open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0
29.43k stars 9.43k forks

How to choose specific GPU to do inference ? #3830

Closed Adblu closed 4 years ago

Adblu commented 4 years ago

I have 8 GPUs [0, 1, 2, 3, 4, 5, 6, 7], and let's say I want to run inference on GPUs 3 and 7. How do I do that?

yuzhj commented 4 years ago

Set CUDA_VISIBLE_DEVICES=3,7, or send the model and data to a specific GPU with .to('cuda:3') or .to('cuda:7').
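A minimal sketch of the environment-variable approach, assuming it is set from inside the script (the visible_gpus helper is illustrative, not an mmdetection API). CUDA_VISIBLE_DEVICES must be assigned before torch or any other CUDA library is first imported, and the selected GPUs are renumbered from zero inside the process, so physical GPUs 3 and 7 become cuda:0 and cuda:1:

```python
import os

# Restrict this process to physical GPUs 3 and 7. Must run before any
# CUDA library (e.g. torch) is imported; afterwards these two devices
# appear to the process as cuda:0 and cuda:1.
os.environ['CUDA_VISIBLE_DEVICES'] = '3,7'

def visible_gpus():
    """Return the physical GPU ids this process is allowed to use."""
    value = os.environ.get('CUDA_VISIBLE_DEVICES', '')
    return [int(i) for i in value.split(',') if i]

print(visible_gpus())  # [3, 7]
```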

Adblu commented 4 years ago

Here is my code:


from mmdet.apis import inference_detector, init_detector
from PIL import Image
from tqdm import tqdm

checkpoint = 'experiment_4/epoch_33.pth'
config = 'experiment_4/res2net_config_augumentations_oryginal.py'
score_thr = 0.1

img_name= '/home/n/Documents/mmdetection/images/7.jpg'
model = init_detector(config, checkpoint, device='cpu')
model.CLASSES = ['class']

result = inference_detector(model, img_name)
if hasattr(model, 'module'):
    model = model.module

img = model.show_result(img_name, result, score_thr=score_thr, show=False)

im = Image.fromarray(img)
# im.save(img_name.split('.jpg')[0]+'_inference_.jpg')
im.show()

How do I specify that in the code? I do not want to fiddle with environment variables every time.

yuzhj commented 4 years ago

model = init_detector(config, checkpoint, device='cuda:3')
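If you want to switch GPUs without editing the script each time, one option is to read the device string from your own environment variable and pass it through. A minimal sketch; pick_device and the MMDET_DEVICE variable name are illustrative choices here, not part of mmdetection:

```python
import os

def pick_device(default='cuda:3'):
    """Return the device string to use, overridable via MMDET_DEVICE.

    MMDET_DEVICE is a variable name chosen for this example, not an
    mmdetection setting. Run e.g. MMDET_DEVICE=cuda:7 python infer.py
    to switch GPUs without touching the code.
    """
    return os.environ.get('MMDET_DEVICE', default)

# model = init_detector(config, checkpoint, device=pick_device())
```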

Adblu commented 4 years ago

Thanks !