Closed PeterSwiss closed 2 years ago
Hi, thanks for your question.
In our DGL comparison baseline, we do not apply any additional batch normalization to the feature embedding matrix, as you can see in our DGL baseline for the GCN model: https://github.com/YukeWang96/OSDI21_AE/blob/6ea6a211248faba7637025e2269423b42ee923ac/dgl_baseline/gcn.py#L6-L29
For reference, here is the GraphConv forward function implementation:
https://github.com/dmlc/dgl/blob/a7b5085a5d88bd90a29e5acef929f6278ddc9528/python/dgl/nn/pytorch/conv/graphconv.py#L337-L455
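To make the point concrete, here is a minimal NumPy sketch (not the repo's code) of the propagation rule that GraphConv's forward pass implements, H' = D^{-1/2} (A + I) D^{-1/2} H W, with no BatchNorm between layers, matching the baseline described above. The toy graph, features, and weights are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One GCN layer with symmetric normalization; note: no batch norm."""
    deg = A_hat.sum(axis=1)                      # degrees incl. self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # D^{-1/2}
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W

# Toy 3-node path graph (assumption), with self-loops added
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)

H = np.eye(3)            # one-hot node features (assumption)
W1 = np.ones((3, 2))     # toy weight matrix (assumption)

# ReLU activation follows the layer directly -- no BatchNorm in between
out = np.maximum(gcn_layer(A_hat, H, W1), 0.0)
print(out.shape)
```

The question in this thread is whether the absence of that extra BatchNorm step (which some DGL example models insert between layers) affects correctness or timing comparisons.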
I see, thanks.
I notice that the baseline models (imported directly from DGL) include operations such as BatchNorm, while the models in your codebase do not. Do your models still produce the correct output as described in the original GNN papers? Is that omission the reason for the lower latency relative to DGL reported in the paper?