a-r-j / graphein

Protein Graph Library
https://graphein.ai/
MIT License

Fix and improve the utilities for PyG #234

Closed · anton-bushuiev closed 1 year ago

anton-bushuiev commented 1 year ago

Reference Issues/PRs

No Reference Issues/PRs

What does this implement/fix? Explain your changes

This PR fixes bugs related to the processing of PyG data.

graphein.ml.conversion.convert_nx_to_pyg
  1. Fix the coordinates bug. Currently, the function creates a data.coords tensor with an extra leading dimension, because coords is also treated as a "graph-level feature".
  2. Convert everything that can be converted into torch.Tensors. This makes further usage much easier and the resulting data object much more PyG-like (see the usage sketch after this list).
  3. Extend the conversion to produce multiple edge_index tensors, one per edge type.
graphein.ml.visualisation.plot_pyg_data
  1. Fix the arguments passed to the nested plotly_protein_structure_graph call. Currently, the positional value for node_size_feature is missing and the order of the subsequent arguments is wrong.
  2. Adapt the coords processing to change 2 in convert_nx_to_pyg.
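
To illustrate the intended behaviour, a rough usage sketch (the PDB code is an arbitrary example, and GraphFormatConvertor is assumed to be the usual conversion entry point; this is not code from the PR):

from graphein.protein.graphs import construct_graph
from graphein.ml.conversion import GraphFormatConvertor

# Build a NetworkX protein graph and convert it to a PyG Data object
g = construct_graph(pdb_code="3eiy")
convertor = GraphFormatConvertor(src_format="nx", dst_format="pyg")
data = convertor(g)

# After the fix, coords should be a plain [num_nodes, 3] tensor
# without the extra leading "graph-level" dimension
print(data.coords.shape)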

What testing did you do to verify the changes in this PR?

Pull Request Checklist

a-r-j commented 1 year ago

Taking another look at this. It looks like the switch from distance matrices of shape N x N to 1 x N x N is breaking PyG-style batching. Do you have any arguments against changing this back?

anton-bushuiev commented 1 year ago

No, I agree that N x N is the right option. However, I did not change this; it is caused by this line, which was there before.

sonarcloud[bot] commented 1 year ago

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rated A)
0 Vulnerabilities (rated A)
0 Security Hotspots (rated A)
0 Code Smells (rated A)

No Coverage information
No Duplication information

a-r-j commented 1 year ago

I've made some changes to the CI/CD in #244, which I expect will resolve the test failures here.

a-r-j commented 1 year ago

It looks like we're running into an issue when it comes to collating the distance matrices into a batch.

I suppose we have three options:

  1. Flatten distance matrices (e.g. n x n -> n^2 x 1).
  2. Add an additional dimension: 1 x n x n, which then becomes 1 x max(n) x max(n) when batched, where max(n) is the largest number of nodes in the batch and smaller proteins are padded appropriately.
  3. Drop it entirely, since it's easy to compute in torch (see the sketch at the end of this comment).

Do you have any strong opinions?
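
For reference, option 3 could look roughly like the following minimal sketch, assuming node coordinates are kept on the data object as a [N, 3] tensor (e.g. data.coords); this is illustrative and not an existing Graphein helper:

import torch

def pairwise_dist(coords):
    # coords: [N, 3] tensor of node positions.
    # Recompute the dense [N, N] distance matrix on the fly
    # instead of serialising it with each graph.
    return torch.cdist(coords, coords)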

anton-bushuiev commented 1 year ago

That's a good question. I'll try it and let you know.

anton-bushuiev commented 1 year ago

Hi, @a-r-j!

From my perspective, (1) is much better than (2), but (3) is the best. I would not serialize distance matrices, for several reasons:

What do you think about it?

a-r-j commented 1 year ago

Very good points @anton-bushuiev.

I am not sure about not supporting it at all. While distance matrices are easy to compute, we can still run into scenarios where there are matrix features we may want to include (e.g. an Hbond map). Thus, I propose:

anton-bushuiev commented 1 year ago

I agree with the first two points, but I am not sure about the sparse format. When working with protein graphs, distance matrices are always dense, because physically the only zero entries are on the diagonal. That's why I think simple reshaping would be best:

# Flatten before writing
data.dist_mat = data.dist_mat.reshape(data.num_nodes * data.num_nodes)
# Restore after reading
data.dist_mat = data.dist_mat.reshape((data.num_nodes, data.num_nodes))
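
For context, flattened matrices also collate cleanly under PyG batching and can be restored per graph afterwards; a small sketch of this (attribute and variable names are illustrative):

import torch
from torch_geometric.data import Batch, Data

# Two toy graphs with 3 and 5 nodes and flattened distance matrices
d1 = Data(x=torch.randn(3, 1), dist_mat=torch.randn(3, 3).reshape(-1))
d2 = Data(x=torch.randn(5, 1), dist_mat=torch.randn(5, 5).reshape(-1))

batch = Batch.from_data_list([d1, d2])  # dist_mat is concatenated to shape [3*3 + 5*5]

for d in batch.to_data_list():
    dist = d.dist_mat.reshape(d.num_nodes, d.num_nodes)  # dense [N, N] again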

Am I missing scenarios where they may be sparse? What is an Hbond map (could you please send a link to its usage in Graphein)?

sonarcloud[bot] commented 1 year ago

Kudos, SonarCloud Quality Gate passed!

0 Bugs (rated A)
0 Vulnerabilities (rated A)
0 Security Hotspots (rated A)
0 Code Smells (rated A)

No Coverage information
No Duplication information

rg314 commented 1 year ago

Hi @anton-bushuiev,

Your commit "Improve convert_nx_to_pyg" introduced some breaking changes. Edges do not necessarily have a kind, unless I'm misunderstanding something?

https://github.com/a-r-j/graphein/blob/b308a58934c29e145666bf35a073b226c9b2b6d4/graphein/ml/conversion.py#L307

        # Split edge index by edge kind
        kind_strs = np.array(list(map(lambda x: "_".join(x), data["kind"])))
        for kind in set(kind_strs):
            key = f"edge_index_{kind}"
            if key in self.columns:
                mask = kind_strs == kind
                data[key] = edge_index[:, mask]
        if "kind" not in self.columns:
            del data["kind"]
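
One possible defensive variant of the snippet above, tolerating edges that carry no kind attribute (an illustrative sketch only, not necessarily how this was resolved in the repository):

import numpy as np
import networkx as nx

def split_edge_index_by_kind(G: nx.Graph, edge_index: np.ndarray, columns) -> dict:
    # Split a [2, E] edge_index by edge "kind", falling back to a placeholder
    # kind for edges that do not carry one instead of raising a KeyError.
    kinds = [d.get("kind", {"unknown"}) for _, _, d in G.edges(data=True)]
    kind_strs = np.array(["_".join(sorted(k)) for k in kinds])
    out = {}
    for kind in set(kind_strs):
        key = f"edge_index_{kind}"
        if key in columns:
            out[key] = edge_index[:, kind_strs == kind]
    return out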