AntonioLonga / PytorchGeometricTutorial

Pytorch Geometric Tutorials

Tutorial 1 - Methods in class Net are missing arguments #16

Open stefano-bragaglia opened 1 year ago

stefano-bragaglia commented 1 year ago

Hi,

My background is in Knowledge Representation and Reasoning and I'm just a beginner in Deep Learning on Graphs, so apologies in advance if I say something against the (coding) conventions in this domain.

I started to go through the tutorials, converting each into a torch/geometric script (to get familiar with the engineering aspect), as well as a torch/geometric/lightning script.

I've noticed that the methods of the class "Net" in Tutorial 1 are missing the parameters they need to function: "dataset" (in __init__()) and "data" (in forward()).

The notebook still works because "dataset" and "data" are defined in cells above Net, so the class silently picks them up as globals. I'm not sure if this was intended, since it might mislead learners.
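
For reference, the notebook version looks roughly like this (a reconstruction from the description above, not a verbatim copy; "dataset" and "data" are globals defined in earlier cells):

import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # relies on the global `dataset` defined in an earlier cell
        self.conv = SAGEConv(dataset.num_features, dataset.num_classes, aggr="max")

    def forward(self):
        # relies on the global `data` defined in an earlier cell
        x = self.conv(data.x, data.edge_index)
        return F.log_softmax(x, dim=1)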

As a result, I've changed your code as follows (including the changes discussed above and some cosmetic changes for readability):

import torch.nn as nn
import torch.nn.functional as F
import torch_geometric.nn as gnn

class Net(nn.Module):

    def __init__(self, num_features: int, num_classes: int):
        super(Net, self).__init__()

        assert num_features > 0, f"At least 1 feature required, {num_features} were given" 
        assert num_classes > 0, f"At least 1 class required, {num_classes} were given" 

        self.conv = gnn.SAGEConv(
            num_features,
            num_classes,
            aggr="max",  # max, mean, add ...
        )

    def forward(self, x, edge_index):
        x = self.conv(x, edge_index)
        x = F.log_softmax(x, dim=1)  # log-probabilities, paired with F.nll_loss in train()

        return x

Consequently, the model is instantiated as follows:

device = torch.device('cuda' if torch.cuda.is_available() and use_cuda_if_available else 'cpu')

model = Net(
    num_features=dataset.num_features,
    num_classes=dataset.num_classes,
).to(device)
data = data.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
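
For completeness, in a standalone script "dataset" and "data" would be loaded beforehand; a minimal sketch, assuming the Cora citation graph (an illustrative choice, the tutorial's dataset may differ):

from torch_geometric.datasets import Planetoid

use_cuda_if_available = True  # assumed flag; set False to force CPU
dataset = Planetoid(root="data/Planetoid", name="Cora")  # illustrative dataset choice
data = dataset[0]  # the single graph, with train/val/test node masks attached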

The train() and test() functions then become:

def train():
    model.train()
    optimizer.zero_grad()
    logits = model(data.x, data.edge_index)  # forward pass over the whole graph
    F.nll_loss(logits[data.train_mask], data.y[data.train_mask]).backward()  # loss on training nodes only
    optimizer.step()

def test():
    model.eval()
    logits, accs = model(data.x, data.edge_index), []
    for _, mask in data('train_mask', 'val_mask', 'test_mask'):  # iterate over the three node masks
        pred = logits[mask].max(1)[1]  # class with the highest log-probability
        acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()  # fraction of correct predictions
        accs.append(acc)

    return accs
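
A minimal driver loop tying these together might look like this (the epoch count and logging format are my own choices, not taken from the tutorial):

for epoch in range(1, 101):  # 100 epochs, an arbitrary but common choice
    train()
    train_acc, val_acc, test_acc = test()
    print(f"Epoch {epoch:03d} | train {train_acc:.4f} | val {val_acc:.4f} | test {test_acc:.4f}")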

Let me know whether my reasoning is correct, or whether I'm missing some crucial point because I'm unfamiliar with this domain. Please also let me know if you find this feedback helpful; if so, I'll keep reporting other things I notice in the tutorials. Finally, let me know if you'd be interested in the Torch Lightning examples as well (so far I've managed to produce them, at least!).

Last but not least, please send my greetings to Trento and Val di Non, which I miss deeply! 🄹 Ciao!

LakhderAmine99 commented 1 year ago

I guess this is intended just for simplicity.