cthoyt closed this pull request 2 years ago
Merging #84 (081a117) into main (040fc7b) will decrease coverage by 0.83%. The diff coverage is 58.06%.
```diff
@@            Coverage Diff             @@
##             main      #84      +/-   ##
==========================================
- Coverage   95.84%   95.00%   -0.84%     
==========================================
  Files          32       33       +1     
  Lines        1419     1441      +22     
==========================================
+ Hits         1360     1369       +9     
- Misses         59       72      +13     
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| chemicalx/compat.py | 38.09% <38.09%> (ø) | |
| chemicalx/data/batchgenerator.py | 98.30% <100.00%> (ø) | |
| chemicalx/data/drugfeatureset.py | 100.00% <100.00%> (ø) | |
| chemicalx/data/drugpairbatch.py | 100.00% <100.00%> (ø) | |
| chemicalx/models/deepdds.py | 97.43% <100.00%> (ø) | |
| chemicalx/models/deepdrug.py | 96.77% <100.00%> (ø) | |
| chemicalx/models/epgcnds.py | 100.00% <100.00%> (ø) | |
| chemicalx/models/gcnbmp.py | 97.61% <100.00%> (ø) | |
| chemicalx/models/mrgnn.py | 100.00% <100.00%> (ø) | |
| chemicalx/models/ssiddi.py | 98.73% <100.00%> (ø) | |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 040fc7b...081a117.
A GPU-equipped dedicated runner could work; let's look into this.
@benedekrozemberczki I just added a unit test that should demonstrate that the transfer to GPU works (but it only runs if a GPU is available, so the tests will pass on the current runner regardless)
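For reference, a GPU-gated test like that is typically written with `unittest.skipUnless`, so CPU-only runners report the test as skipped rather than failed. The sketch below is an illustration, not the actual test from this PR; the `_gpu_available` helper is invented here as a stand-in for a direct `torch.cuda.is_available()` check:

```python
import unittest


def _gpu_available() -> bool:
    """Stand-in availability check; returns False when torch or CUDA is absent."""
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False


class TestGPUTransfer(unittest.TestCase):
    """Sketch of a GPU-gated test: skipped (not failed) on CPU-only runners."""

    @unittest.skipUnless(_gpu_available(), "requires a CUDA-capable GPU")
    def test_packed_graph_to_gpu(self):
        # Mirrors the manual check from this thread: pack a molecule,
        # move it to the GPU, and verify the device type changed.
        import torch
        from torchdrug.data import Molecule
        from chemicalx.compat import Graph

        packed = Graph.pack([Molecule.from_smiles("CCO")])
        moved = packed.to(torch.device("cuda"))
        self.assertEqual("cuda", moved.edge_list.device.type)
```

On a runner without CUDA the decorator short-circuits before any torchdrug import inside the test body, so the suite stays green either way.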
Good news! I tested the following in a Google Colab notebook with a GPU attached:

```python
!python -m pip install torch
!python -m pip install "torch-scatter>=2.0.8"
!python -m pip install git+https://github.com/cthoyt/chemicalx.git@packed-graph-compat

import torch.cuda
from torchdrug.data import Molecule
from chemicalx.compat import Graph, PackedGraph

assert torch.cuda.is_available(), "cannot test the compatibility layer without a GPU available"
gpu_device = torch.device("cuda")

structures = ["C(=O)O", "CCO"]
molecules = [Molecule.from_smiles(smiles) for smiles in structures]
packed_graph = Graph.pack(molecules)
assert isinstance(packed_graph, PackedGraph)
assert "cpu" == packed_graph.edge_list.device.type
assert -1 == packed_graph.edge_list.get_device(), "Device should be -1 to represent that it's on the CPU"

gpu_packed_graph = packed_graph.to(gpu_device)
assert "cuda" == gpu_packed_graph.edge_list.device.type
assert 0 <= gpu_packed_graph.edge_list.get_device(), "Device should be 0 or higher for a CUDA device"
assert packed_graph is not gpu_packed_graph, "PackedGraph.cuda() creates a new object; there is no in-place version of this, unfortunately"
```
so I am confident that this PR does what's intended.
@cthoyt this is amazing!
@cthoyt Feel free to merge this!
Summary

This PR adds a compatibility layer for the packed graph class from torchdrug while we're waiting for an upstream fix (https://github.com/DeepGraphLearning/torchdrug/pull/70). This layer makes sure there's a `to()` function that takes a device, as we usually expect torch objects to have.

Changes

- Adds a `chemicalx.compat` module containing:
  - a patched `torchdrug.data.PackedGraph` that implements the `to()` function
  - a patched `torchdrug.data.Graph` that uses the monkey-patched `PackedGraph` when called in `chemicalx.data.drugfeatureset`
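The monkey-patching idea behind such a compat layer can be sketched in isolation. This is not the actual `chemicalx.compat` code; `DummyPackedGraph` is an invented stand-in for a class that exposes `cuda()`/`cpu()` but lacks a device-agnostic `to()`:

```python
import copy


class DummyPackedGraph:
    """Invented stand-in for a graph class that has cuda()/cpu() but no to()."""

    def __init__(self, device: str = "cpu"):
        self.device = device

    def cuda(self):
        # Returns a new object rather than mutating in place,
        # like the behavior observed in this thread.
        new = copy.copy(self)
        new.device = "cuda"
        return new

    def cpu(self):
        new = copy.copy(self)
        new.device = "cpu"
        return new


def to(self, device):
    """Device-agnostic to(), dispatching to the existing cuda()/cpu() methods."""
    return self.cuda() if str(device).startswith("cuda") else self.cpu()


# Monkey-patch: every existing and future instance now has a to() method.
DummyPackedGraph.to = to

graph = DummyPackedGraph()
moved = graph.to("cuda")
assert moved.device == "cuda"
assert graph is not moved  # a new object is returned, not an in-place move
```

Once the upstream fix lands, the patch can be dropped and plain `torchdrug.data.PackedGraph` used directly.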