Closed: mrinal-shekhar closed this issue 1 year ago.
Hi @mrinal-shekhar,
I got the same error when trying to run python multiligand_inference.py:
DGL backend not selected or invalid. Assuming PyTorch for now.
Setting the default backend to "pytorch". You can change it in the ~/.dgl/config.json file or export the DGLBACKEND environment variable. Valid options are: pytorch, mxnet, tensorflow (all lowercase)
[2022-07-20 11:40:09.969125] [ Using Seed : 1 ]
device = cpu
Entering batch ending in index 8/667
Traceback (most recent call last):
File "multiligand_inference.py", line 275, in <module>
main()
File "multiligand_inference.py", line 272, in main
write_while_inferring(lig_loader, model, args)
File "multiligand_inference.py", line 216, in write_while_inferring
lig_graphs = lig_graphs.to(args.device)
File "/home/ken/anaconda3/envs/equibind/lib/python3.7/site-packages/dgl/heterograph.py", line 5448, in to
ret._graph = self._graph.copy_to(utils.to_dgl_context(device))
File "/home/ken/anaconda3/envs/equibind/lib/python3.7/site-packages/dgl/utils/internal.py", line 533, in to_dgl_context
device_id = F.device_id(ctx)
File "/home/ken/anaconda3/envs/equibind/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 90, in device_id
return 0 if ctx.type == 'cpu' else th.cuda.current_device()
File "/home/ken/anaconda3/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 479, in current_device
_lazy_init()
File "/home/ken/anaconda3/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Could it be that you're using an incompatible CUDA version? If you've used the environment.yml file that was provided, you'll need to have CUDA 10.2 installed. I haven't tested with different versions of CUDA (e.g. 11.6), but I assume you'll need to install different versions of cudatoolkit and torch to get things to work.
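The traceback above shows that the script still tries to move the ligand graphs to a CUDA device even though a CPU-only PyTorch build is installed. As a general pattern (this is a sketch, not code from EquiBind itself, and pick_device is a hypothetical helper), the device choice can be guarded so it falls back to CPU whenever no CUDA build is available:

```python
def pick_device(requested: str, cuda_available: bool) -> str:
    """Return a safe device string for tensor/graph placement.

    Falls back to "cpu" when a CUDA device was requested but the
    installed PyTorch build has no CUDA support -- the situation
    behind "AssertionError: Torch not compiled with CUDA enabled".
    """
    if requested.startswith("cuda") and not cuda_available:
        return "cpu"
    return requested

# In a real script you would pass torch.cuda.is_available() as the flag:
#   device = pick_device(args.device, torch.cuda.is_available())
#   lig_graphs = lig_graphs.to(device)
```

Keeping the availability check in one place means every later `.to(device)` call is safe, regardless of which build of torch is installed.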
@niksubramanian
I am using CUDA 11.7. I will try to modify the environment.yml file to make it work with CUDA 11.7.
Thanks, Ken
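For a CUDA 11.7 setup, a modified conda environment.yml fragment might look like the sketch below. The version pins are assumptions and were not tested in this thread (PyTorch 1.13 is the first release that ships conda packages built against CUDA 11.7, via the pytorch-cuda metapackage from the nvidia channel):

```yaml
# Hypothetical fragment -- pins are an assumption, not from this thread
channels:
  - pytorch
  - nvidia
  - conda-forge
dependencies:
  - pytorch=1.13
  - pytorch-cuda=11.7
```

Note that moving to a newer torch may also require a matching DGL build, so the DGL pin in the original environment.yml would likely need updating as well.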
Hi, I have tried to install EquiBind and create an environment for both CUDA-enabled and CPU machines, using environment.yml and environment_cpu.yml respectively. After activating the equibind environment, I am trying to run a test example with the following command:
python multiligand_inference.py -o ./test_output/ -r ./test_input/protein.pdb -l ./test_input/ligand.sdf
However, I am getting the same error for both the CUDA and CPU installations:
device = cpu
Entering batch ending in index 8/18
Traceback (most recent call last):
File "multiligand_inference.py", line 275, in <module>
main()
File "multiligand_inference.py", line 272, in main
write_while_inferring(lig_loader, model, args)
File "multiligand_inference.py", line 216, in write_while_inferring
lig_graphs = lig_graphs.to(args.device)
File "/Users/mshekhar/miniconda3/envs/equibind/lib/python3.7/site-packages/dgl/heterograph.py", line 5448, in to
ret._graph = self._graph.copy_to(utils.to_dgl_context(device))
File "/Users/mshekhar/miniconda3/envs/equibind/lib/python3.7/site-packages/dgl/utils/internal.py", line 533, in to_dgl_context
device_id = F.device_id(ctx)
File "/Users/mshekhar/miniconda3/envs/equibind/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 90, in device_id
return 0 if ctx.type == 'cpu' else th.cuda.current_device()
File "/Users/mshekhar/miniconda3/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 479, in current_device
_lazy_init()
File "/Users/mshekhar/miniconda3/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Please help.
Regards,
Mrinal