chenjoya / 2dtan

An optimized re-implementation of 2D-TAN: Learning 2D Temporal Adjacent Networks for Moment Localization with Natural Language (AAAI'2020).

IndexError: index 1 is out of bounds for dimension 1 with size 1 #5

Open balabanahei opened 4 years ago

balabanahei commented 4 years ago

The error is:

Traceback (most recent call last):
  File "/home/yu/.jupyter/ngsv/2dtan/train_net.py", line 161, in <module>
    main()
  File "/home/yu/.jupyter/ngsv/2dtan/train_net.py", line 155, in main
    model = train(cfg, args.local_rank, args.distributed)
  File "/home/yu/.jupyter/ngsv/2dtan/train_net.py", line 75, in train
    arguments,
  File "/home/yu/.jupyter/ngsv/2dtan/tan/engine/trainer.py", line 113, in do_train
    device=cfg.MODEL.DEVICE,
  File "/home/yu/.jupyter/ngsv/2dtan/tan/engine/inference.py", line 94, in inference
    return evaluate(dataset=dataset, predictions=predictions, nms_thresh=nms_thresh)
  File "/home/yu/.jupyter/ngsv/2dtan/tan/data/datasets/evaluation.py", line 45, in evaluate
    candidates, scores = score2d_to_moments_scores(score2d, num_clips, duration)
  File "/home/yu/.jupyter/ngsv/2dtan/tan/data/datasets/utils.py", line 20, in score2d_to_moments_scores
    scores = score2d[grids[:,0], grids[:,1]]
IndexError: index 1 is out of bounds for dimension 1 with size 1

How can I fix it? I'm using torch==1.3.1 and cuda==10.2. Which versions are you using? Thanks!
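For reference, here is a minimal standalone sketch (not the repo's actual code) that reproduces this exact message: the gather in score2d_to_moments_scores assumes score2d is a full (num_clips x num_clips) map, and fails as soon as its second dimension has collapsed to size 1.

```python
import torch

num_clips = 16
# All (row, col) index pairs, as score2d_to_moments_scores uses them.
grids = torch.ones(num_clips, num_clips).nonzero()

# Expected case: score2d is a full 2D score map; fancy indexing works.
score2d = torch.rand(num_clips, num_clips)
scores = score2d[grids[:, 0], grids[:, 1]]  # OK, shape (num_clips * num_clips,)

# Failure case: if score2d arrives with a collapsed second dimension
# (e.g. from a batch/shape bug upstream), any column index > 0 overflows it.
score2d_bad = torch.rand(num_clips, 1)
score2d_bad[grids[:, 0], grids[:, 1]]
# IndexError: index 1 is out of bounds for dimension 1 with size 1
```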

ycWang9725 commented 4 years ago

Hi, I got the same error on torch 1.1, and on torch 1.5 it disappeared. Hope this helps :)

DW-Lay commented 3 years ago

A dimension error like this is sometimes caused by a squeeze() operation producing an unexpected result; check your code carefully for that.
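A quick demonstration of that pitfall: a bare squeeze() removes every size-1 axis, so a batch that happens to contain a single sample also loses its batch dimension (general PyTorch behavior, not specific to this repo).

```python
import torch

num_clips = 16

# A normal batch of 2D score maps keeps its shape under squeeze():
batch = torch.rand(32, num_clips, num_clips)
print(batch.squeeze().shape)       # torch.Size([32, 16, 16])

# A single-sample batch (e.g. an incomplete last batch) does not:
last_batch = torch.rand(1, num_clips, num_clips)
print(last_batch.squeeze().shape)  # torch.Size([16, 16]) -- batch dim silently dropped
```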

wjn922 commented 3 years ago

I have the same problem. In some cases, the prediction becomes 1D instead of 2D. How can this be solved?

DW-Lay commented 3 years ago

In my case, squeeze() eliminated a dimension that should have been kept. I suggest you carefully check whether anything similar happens during the related operations in your code; you will have to judge where it occurs.
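One general way to guard against this (a standard PyTorch idiom, not a patch from this repo) is to pass the axis to squeeze() explicitly, so an intentional batch dimension of size 1 is never removed:

```python
import torch

# (batch, channel, num_clips, num_clips) with a single-sample batch:
x = torch.rand(1, 1, 16, 16)

print(x.squeeze().shape)   # torch.Size([16, 16])    -- batch dim lost too
print(x.squeeze(1).shape)  # torch.Size([1, 16, 16]) -- only the named axis removed
```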

Zzz512 commented 3 years ago

Hello, I met the same problem when training on TACoS, and I solved it by resetting the batch size to 33 (or another suitable number). It likely happens because the TACoS training set contains 4001 samples: with a batch size of 16, 32, 64, etc., the last batch always has only 1 sample, and the iterator then yields an abnormal tensor that is missing one dimension.
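If a single-sample final batch is indeed the trigger, an alternative to hand-picking the batch size is to drop the incomplete batch in the training loader. A sketch assuming a standard torch.utils.data.DataLoader (train_dataset is hypothetical, and the repo may construct its loaders differently); note that the traceback above fails during evaluation, where dropping samples would skew metrics, so there the shape handling itself should be fixed instead.

```python
from torch.utils.data import DataLoader

# 4001 % 32 == 1, so the last batch would hold exactly one sample;
# drop_last=True discards it, and every batch keeps a real batch dimension.
loader = DataLoader(train_dataset, batch_size=32, shuffle=True, drop_last=True)
```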