ORNL / HydraGNN

Distributed PyTorch implementation of multi-headed graph convolutional neural networks
BSD 3-Clause "New" or "Revised" License

Flexible architecture naming convention #183

Closed. JustinBakerMath closed this 1 year ago.

JustinBakerMath commented 1 year ago

Changed variable names to alleviate confusion and improve architecture flexibility.

The core of the architecture is this for loop, in which each iteration applies one pass through a graph convolution from graph_convs followed by one pass through a feature_layer:

        # F is torch.nn.functional; graph_convs and feature_layers are
        # parallel torch.nn.ModuleLists built in _init_conv
        for conv, feat_layer in zip(self.graph_convs, self.feature_layers):
            c = conv(x=x, **conv_args)   # one message-passing step
            x = F.relu(feat_layer(c))    # feature transform plus activation

Graph convolutions and feature layers are stacked in the _init_conv method. The feature_layers are flexible: they can stack BatchNorm layers (the default inherited from Base), be effectively disabled with torch.nn.Identity as in SCFStack, or perform more advanced multi-heading as in GATStack; a sketch of these options follows.
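As a rough illustration only (not the repository's code; the class name, dimensions, and the GCNConv/GATConv choices are placeholder assumptions), an _init_conv-style constructor could populate the two module lists like this:

    import torch
    from torch_geometric.nn import GATConv, GCNConv

    class ExampleStack(torch.nn.Module):
        """Toy stack illustrating the three feature_layer choices."""

        def __init__(self, dim=64, num_layers=3, mode="batchnorm", heads=4):
            super().__init__()
            self.graph_convs = torch.nn.ModuleList()
            self.feature_layers = torch.nn.ModuleList()
            for _ in range(num_layers):
                if mode == "gat":
                    # GATStack-style multi-heading: the attention heads are
                    # folded back down to `dim` by a Linear feature layer
                    self.graph_convs.append(GATConv(dim, dim, heads=heads))
                    self.feature_layers.append(torch.nn.Linear(dim * heads, dim))
                else:
                    self.graph_convs.append(GCNConv(dim, dim))
                    if mode == "identity":
                        # SCFStack-style: pass node features through unchanged
                        self.feature_layers.append(torch.nn.Identity())
                    else:
                        # Base default: BatchNorm over node features
                        self.feature_layers.append(torch.nn.BatchNorm1d(dim))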

If additional flexibility is desired, the F.relu call in the for loop can be folded into the feature_layer itself by initializing it as a Sequential module.
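For example (a minimal sketch; the BatchNorm1d choice and dim value are assumptions), each feature layer could carry its own activation, so the loop body reduces to x = feat_layer(conv(x=x, **conv_args)):

    import torch

    dim = 64  # assumed hidden width
    feature_layer = torch.nn.Sequential(
        torch.nn.BatchNorm1d(dim),  # the existing per-node feature layer
        torch.nn.ReLU(),            # activation moved out of the forward loop
    )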