Huntersxsx / MGPN

source code of our MGPN in SIGIR 2022

Problems reproducing the scores in the paper on ActivityNet #4

Open h-somehow opened 1 year ago

h-somehow commented 1 year ago

Dear @Huntersxsx, Thanks for your interesting work.

I have achieved similar results on Charades-STA and TACoS. However, I ran into a problem on ActivityNet. During training, the following warning appears:

"UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). ""

To address this issue, I made the following code modification:

# original call, commented out:
# state['scheduler'].step()
# step the scheduler only after the first epoch, so optimizer.step() is called first
if state['epoch'] > 0:
    state['scheduler'].step()
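
For context, the ordering the warning asks for looks roughly like the minimal, self-contained sketch below: optimizer.step() inside the epoch loop, then scheduler.step() once per epoch. The toy model, optimizer, and random data are placeholders for illustration, not the repository's actual training code.

import torch
from torch import nn, optim

# Placeholder model/optimizer/scheduler, only to make the snippet runnable.
model = nn.Linear(8, 1)
optimizer = optim.SGD(model.parameters(), lr=0.01)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
criterion = nn.MSELoss()

for epoch in range(3):
    for _ in range(5):
        x, y = torch.randn(4, 8), torch.randn(4, 1)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()      # update the weights first...
    scheduler.step()          # ...then advance the learning-rate schedule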

However, the obtained results are as follows:

R@1, IoU@0.5 = 46.1  (47.92 in paper)
R@1, IoU@0.7 = 29.34 (30.47 in paper)
R@5, IoU@0.5 = 76.26 (78.15 in paper)
R@5, IoU@0.7 = 63.11 (63.56 in paper)

I have already set torch.backends.cudnn.deterministic = False and torch.backends.cudnn.benchmark = True, and I have run training many times, but the results above are the best I obtained. Even ignoring the warning, the performance gap remains.
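
For reference, the settings mentioned above amount to something like the sketch below; the seed value and the extra seeding calls are assumptions added for illustration and are not taken from the repository.

import random
import numpy as np
import torch

# cuDNN settings as described in this comment: benchmark on, determinism off.
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True

# Hypothetical seeding for the non-cuDNN sources of randomness (weight init, shuffling).
seed = 0  # assumed value, not from the original issue
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)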

I used 4 GPUs and set the training batch size to 64 on ActivityNet. Is there anything else I should change in the code?

Looking forward to your reply.

akzeycgdn265 commented 1 year ago

Hello! I have the same problem as you. Have you solved it?