Thanks for the question.
I use the environment variable `CUDA_VISIBLE_DEVICES=1` if I want to specify a GPU, rather than `cuda:id`.
It seems there are more problems related to the GPU assignment.
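For example (a minimal sketch; the variable must be set before CUDA is initialized):

```python
import os

# Select physical GPU 1 before torch initializes CUDA;
# setting this after the first CUDA call has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch

# Only GPU 1 is visible now, and it is exposed as cuda:0.
device = torch.device("cuda")
print(torch.cuda.device_count())  # 1
```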
@SlongLiu Thank you for your response!
@tingxueronghua I ended up using `torch.cuda.set_device(rank)` to assign a model instance to the appropriate GPU.
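For anyone who hits the same issue, a rough sketch of that approach (`rank` is the worker index from whatever launcher you use; the paths are placeholders):

```python
import torch
from groundingdino.util.inference import load_model

def setup_model(rank: int, config_path: str, checkpoint_path: str):
    # Make GPU `rank` the default device for this process, so kernels
    # and tensors created without an explicit index land on it.
    torch.cuda.set_device(rank)
    model = load_model(config_path, checkpoint_path)
    # Move the weights explicitly as well; the device argument alone
    # did not do it in my tests.
    return model.to(f"cuda:{rank}")
```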
Hi! Thank you for your amazing work. I am doing zero-shot inference on a custom dataset; however, I came across a problem.
I've noticed that you implemented a `device` argument in the `groundingdino.util.inference.load_model` function. Below is the code snippet:
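Paraphrasing the function from the version of the repo I'm using (helper names like `SLConfig` and `clean_state_dict` are as I recall them, not guaranteed verbatim):

```python
import torch
from groundingdino.models import build_model
from groundingdino.util.slconfig import SLConfig
from groundingdino.util.utils import clean_state_dict

def load_model(model_config_path, model_checkpoint_path, device="cuda"):
    args = SLConfig.fromfile(model_config_path)
    args.device = device  # the device only ends up on args
    model = build_model(args)
    checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
    model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
    model.eval()
    # Note: nothing here ever calls model.to(device).
    return model
```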
I understand that the device is passed to the `build_model` function via `args`; however, I cannot find any reference indicating that the model is actually loaded onto that device.
I did an experiment where I created two instances of the model, one of them moved explicitly with `.to('cuda:0')`. I ran inference on one image with each and recorded the time taken. Here's pseudo code for what I used:
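Something along these lines (a sketch rather than my exact script; the paths and the caption are placeholders):

```python
import time
import torch
from groundingdino.util.inference import load_model, load_image

CONFIG_PATH = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
CHECKPOINT_PATH = "weights/groundingdino_swint_ogc.pth"
IMAGE_PATH = "image.jpg"

# Two instances from the same weights; only one is moved explicitly.
model = load_model(CONFIG_PATH, CHECKPOINT_PATH, device="cuda:0")
model_cuda = load_model(CONFIG_PATH, CHECKPOINT_PATH, device="cuda:0").to("cuda:0")

_, image = load_image(IMAGE_PATH)

with torch.no_grad():
    start = time.time()
    model(image[None], captions=["dog ."])  # left wherever load_model put it
    print("model:", time.time() - start)

    start = time.time()
    model_cuda(image[None].to("cuda:0"), captions=["dog ."])
    print("model_cuda:", time.time() - start)
```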
The time taken for `model` to finish was 3.7 sec, while `model_cuda` took only 0.4 sec. Furthermore, the following code confirmed that none of the model parameters were transferred to CUDA:
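The check was roughly:

```python
# Collect the set of devices the parameters live on.
print({p.device for p in model.parameters()})       # {device(type='cpu')}
print({p.device for p in model_cuda.parameters()})  # {device(type='cuda', index=0)}
```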
Additionally, I've tried moving it to my second GPU by setting `device = torch.device("cuda:1")`. However, I encountered another error, this time stating that illegal memory was accessed. Here's the traceback:
So, am I missing something, or is the model running on the CPU despite me passing the device argument? Furthermore, have you encountered this on a multi-GPU setup?