HannesStark / EquiBind

EquiBind: geometric deep learning for fast predictions of the 3D structure in which a small molecule binds to a protein
MIT License

Add device check before multiligand inference #55

Closed: zahidpanj closed this 1 year ago

zahidpanj commented 1 year ago

This PR resolves the following bug:

[2022-07-26 21:52:43.978184] [ Using Seed :  1  ]
device = cpu
Entering batch ending in index 8/20
Traceback (most recent call last):
  File "multiligand_inference.py", line 275, in <module>
    main()
  File "multiligand_inference.py", line 272, in main
    write_while_inferring(lig_loader, model, args)
  File "multiligand_inference.py", line 216, in write_while_inferring
    lig_graphs = lig_graphs.to(args.device)
  File "/home/miniconda3/envs/equibind/lib/python3.7/site-packages/dgl/heterograph.py", line 5448, in to
    ret._graph = self._graph.copy_to(utils.to_dgl_context(device))
  File "/home/miniconda3/envs/equibind/lib/python3.7/site-packages/dgl/utils/internal.py", line 533, in to_dgl_context
    device_id = F.device_id(ctx)
  File "/home/miniconda3/envs/equibind/lib/python3.7/site-packages/dgl/backend/pytorch/tensor.py", line 88, in device_id
    return 0 if ctx.type == 'cpu' else th.cuda.current_device()
  File "/home/miniconda3/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 479, in current_device
    _lazy_init()
  File "/home/miniconda3/envs/equibind/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The error occurs when running python multiligand_inference.py -o path/to/output_directory -r path/to/receptor.pdb -l path/to/ligands.sdf

in an environment created by

conda env create -f environment_cpuonly.yml
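
For reference, below is a minimal sketch of the kind of device check this PR describes, assuming the device argument is args.device as in the traceback. The helper name resolve_device is hypothetical and the actual diff may differ.

import torch

def resolve_device(requested):
    # Hypothetical helper (not necessarily the actual PR code): fall back to CPU
    # when PyTorch has no usable CUDA runtime, so that lig_graphs.to(args.device)
    # never reaches torch.cuda.current_device() on a CPU-only build.
    if str(requested).startswith("cuda") and not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device(requested)

# e.g. in main() of multiligand_inference.py, before
# write_while_inferring(lig_loader, model, args):
# args.device = resolve_device(args.device)

With a guard like this, a CUDA device requested on a CPU-only install is downgraded to CPU instead of raising the assertion; as the traceback shows, DGL only calls th.cuda.current_device() when the target context is not CPU.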

Note

I have not tested this change on a GPU-enabled device.