AmirSh15 / Compact_SER


About reproduction issue #1

Open · SteveTanggithub opened 3 years ago

SteveTanggithub commented 3 years ago

Could you please provide every single hyperparameter setting used for training and the models? We are not able to reproduce the results in the paper. Thank you very much!

xlzhou01 commented 11 months ago

I have the same problem as you. I found one possible error in the authors' code; I'm not sure if my reading is correct. Specifically, in the train function in main.py, each iteration of the for loop draws a fresh random batch_size sample from the re-shuffled training set. So by the time the loop ends, it may not have traversed the complete training set: some samples can be picked repeatedly while others are never seen. See the sketch after the snippet below for one way this could be fixed.

import numpy as np
import torch

for pos in pbar:
    # Re-shuffles ALL indices on every iteration and keeps only the first
    # batch_size of them, so batches are drawn with replacement across
    # iterations and one "epoch" need not cover the whole training set
    selected_idx = np.random.permutation(len(train_graphs))[:args.batch_size]
    batch_graph = [train_graphs[idx] for idx in selected_idx]
    output = model(batch_graph)
    labels = torch.LongTensor([graph.label for graph in batch_graph]).to(device)
    loss = criterion(output, labels)

    # backprop
    if optimizer is not None:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
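
For comparison, here is a minimal sketch of one way to make each epoch visit every training graph exactly once: shuffle the indices a single time, then slice that permutation into consecutive batches. This is just an illustration, not the authors' code; train_epoch is a hypothetical wrapper, and model, train_graphs, criterion, optimizer, device, and batch_size are assumed to behave as in the snippet above.

import numpy as np
import torch

def train_epoch(model, train_graphs, criterion, optimizer, device, batch_size):
    # Shuffle once per epoch so every graph lands in exactly one batch
    perm = np.random.permutation(len(train_graphs))
    total_loss = 0.0
    num_batches = 0
    for start in range(0, len(train_graphs), batch_size):
        selected_idx = perm[start:start + batch_size]
        batch_graph = [train_graphs[idx] for idx in selected_idx]
        output = model(batch_graph)
        labels = torch.LongTensor([g.label for g in batch_graph]).to(device)
        loss = criterion(output, labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()
        num_batches += 1
    return total_loss / max(num_batches, 1)

With this structure the number of gradient steps per epoch is ceil(len(train_graphs) / batch_size), and every sample is seen exactly once per epoch, which should make results easier to compare against the paper.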