Open ravinmg opened 2 years ago
Same here. Is there any solution for this? I keep getting the message `module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:2` when trying to use a GPU other than the default one, and I can't seem to find a way to use multiple GPUs.
Hi, I have included a parameter to set which device to use.
In `detection.py`, inside `get_detector()`, the line `net = torch.nn.DataParallel(net).to(device)` needs to be changed to `net = torch.nn.DataParallel(net, device_ids=[0]).to(device)`.
Similarly, in `recognition.py` inside `get_recognizer()`, the line `model = torch.nn.DataParallel(model).to(device)` needs to be changed to `model = torch.nn.DataParallel(model, device_ids=[0]).to(device)`.
To use the second GPU, set `device_ids` to `[1]`. By default, PyTorch identifies your GPU IDs as `[0, 1]` and starts with the first value in the list, so the target device needs to be set explicitly.
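To make the mapping explicit, here is a hypothetical helper (not part of easyocr's API; `device_ids_for` is an illustrative name) that derives the `device_ids` list from a torch device string, so `"cuda:1"` pins `DataParallel` to the second GPU:

```python
def device_ids_for(device: str) -> list:
    """Derive the DataParallel device_ids list from a torch device string.

    "cuda:2" -> [2]; plain "cuda" (or "cpu") falls back to [0], matching
    the default behavior described above.
    """
    if device.startswith("cuda:"):
        return [int(device.split(":", 1)[1])]
    return [0]
```

The result can then be passed straight through, e.g. `torch.nn.DataParallel(net, device_ids=device_ids_for(device)).to(device)`.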
Create a way to pass `device_ids` as a parameter when you initialise `Reader` from easyocr. I used a variable named `gpu_id` to pass the `device_ids` value.
So my objects look like this:

```python
reader_obj_0 = easyocr.Reader(['en'], model_storage_directory=model_path, gpu="cuda:0", gpu_id=[0])
reader_obj_1 = easyocr.Reader(['en'], model_storage_directory=model_path, gpu="cuda:1", gpu_id=[1])
```
This works for me.
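If it helps, a minimal sketch of spreading inference across two such readers with round-robin dispatch is below. The `_StubReader` class is a stand-in for the per-GPU `easyocr.Reader` instances, so the dispatch logic can be seen (and run) without GPUs:

```python
from itertools import cycle

class _StubReader:
    """Stand-in for an easyocr.Reader pinned to one GPU (hypothetical)."""
    def __init__(self, name):
        self.name = name

    def readtext(self, image_path):
        # A real Reader would return OCR results for the image.
        return f"{self.name} processed {image_path}"

# In practice these would be the reader_obj_0 / reader_obj_1 instances above.
_readers = cycle([_StubReader("cuda:0"), _StubReader("cuda:1")])

def ocr(image_path):
    # Alternate requests between the per-GPU readers.
    return next(_readers).readtext(image_path)
```

For true parallelism you would run each reader in its own process or thread, since a single Python loop like this still processes images one at a time.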
Hi folks, I am running easyocr on multiple GPUs, but it is only using one GPU. Any suggestions on how I can use multiple GPUs for inference?