samsledje / ConPLex

Adapting protein language models and contrastive learning for highly-accurate drug-target interaction prediction.
http://conplex.csail.mit.edu
MIT License

GPU is available in the environment, but no `device` argument is passed to the `Pipeline` object. Model will be on CPU. #37

Open hs756149694 opened 1 month ago

hs756149694 commented 1 month ago

When I run the train.py file, it prints this error: "GPU is available in the environment, but no `device` argument is passed to the `Pipeline` object. Model will be on CPU." How can I solve this problem?

samsledje commented 1 month ago

Please post the command you ran and the corresponding stack trace/error, and I can try to help you debug.

hs756149694 commented 1 month ago

> Please post the command you ran and the corresponding stack trace/error, and I can try to help you debug.

command: conplex-dti train --run-id TestRun --config config/default_config.yaml

(screenshot of the error output)

samsledje commented 1 month ago

Can you post the specs of the system you are running on? It looks like you are running on Windows, on which ConPLex has not previously been tested. It appears that PyTorch is able to find the `cuda:0` GPU, so you may be able to ignore this message: the model is loaded on CPU and then later moved to GPU. Do you see GPU utilization (`nvidia-smi`) while training runs?
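A quick, ConPLex-independent way to confirm that PyTorch can see the GPU is a sketch like the following; the device name printed will vary by system:

```python
import torch

# Report whether PyTorch can reach a CUDA device at all.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))
```

If this prints `CUDA available: True`, the warning about the pipeline is separate from whether training can use the GPU.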

hs756149694 commented 1 month ago

> Can you post the specs of the system you are running on? It looks like you are running on Windows, on which ConPLex has not previously been tested. It appears that Pytorch is able to find the cuda:0 GPU, so it is possible you will be able to ignore this error message as the model is loaded on CPU, then later moved to GPU. Do you see GPU utilization (nvidia-smi) when running?

The system I run the code on is Windows. When I was training, I checked the GPU usage and it was very low.

hs756149694 commented 1 month ago

It looks like the `pipe` parameter is never used and is not instantiated in train.py. (screenshot of the relevant train.py code)

samsledje commented 1 month ago

I think you can safely ignore this error. The pipeline is not on GPU when it is first created, but it is moved to GPU when the model is registered with CUDA (line 175 in the screenshot you posted). The warning is most likely emitted when the pipeline is initially created, in `__init__`. My guess is that this warning was either added in a more recent version of transformers or is Windows-specific; either way, it should be safe to ignore.
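The "create on CPU, then move to CUDA" pattern described above can be sketched with a stand-in `nn.Linear` in place of the actual ConPLex model (the warning fires at creation time, before the move):

```python
import torch
import torch.nn as nn

# Stand-in for the pipeline/model: like the pipeline in __init__,
# it starts out on CPU (this is the point where the warning is emitted).
model = nn.Linear(8, 2)
print(next(model.parameters()).device)  # cpu

# Later the training script moves it to the GPU (analogous to the model
# being registered with CUDA at line 175 of train.py).
if torch.cuda.is_available():
    model = model.to(torch.device("cuda:0"))
    print(next(model.parameters()).device)  # cuda:0
```

If the message is bothersome, the HuggingFace `pipeline(...)` constructor also accepts a `device` argument (e.g. `device=0`) that places the model on the GPU at creation time and suppresses the warning; since train.py moves the model to CUDA anyway, this change would be purely cosmetic.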