SteveTanggithub opened 3 years ago
I have the same problem as you. I found one possible error in the authors' code, though I'm not sure about it. It is in the `train` function in main.py: each iteration of the for loop draws a fresh random permutation of the training set and keeps only the first `batch_size` samples. Because every batch is sampled independently, by the end of the for loop the training set may not have been fully traversed (some samples can be drawn several times within an "epoch" while others are never seen). For comparison, I put a sketch of a full-traversal epoch after the snippet.
```python
for pos in pbar:
    # A new random permutation is drawn on every iteration, and only its
    # first batch_size indices are used -- batches are sampled independently,
    # so one pass of this loop need not cover the whole training set.
    selected_idx = np.random.permutation(len(train_graphs))[:args.batch_size]
    batch_graph = [train_graphs[idx] for idx in selected_idx]
    output = model(batch_graph)
    labels = torch.LongTensor([graph.label for graph in batch_graph]).to(device)
    loss = criterion(output, labels)
    # backprop
    if optimizer is not None:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```
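If the intent is to visit every sample exactly once per epoch, something like the sketch below would do it: shuffle the index array once, then walk over it in contiguous `batch_size` slices. This is just my guess at the intended behavior, not the authors' code; the variable names (`train_graphs`, `model`, `criterion`, `optimizer`, `args`, `device`) are assumed to match those in main.py.

```python
import numpy as np
import torch

# Shuffle once per epoch, then iterate over contiguous slices so every
# training graph is used exactly once. (Sketch only -- variable names are
# assumed to match the authors' main.py.)
perm = np.random.permutation(len(train_graphs))
for start in range(0, len(train_graphs), args.batch_size):
    selected_idx = perm[start:start + args.batch_size]
    batch_graph = [train_graphs[idx] for idx in selected_idx]
    output = model(batch_graph)
    labels = torch.LongTensor([graph.label for graph in batch_graph]).to(device)
    loss = criterion(output, labels)
    if optimizer is not None:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```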
Also, could you please provide the complete hyperparameter settings for training and for the models? We are not able to reproduce the results in the paper. Thank you very much!