EGO4D / social-interactions

Issue when running run.py for local model training #16

Open Uncertain-Quark opened 1 year ago

Uncertain-Quark commented 1 year ago

I ran into the following error after downloading the dataset and trying to execute run.py locally. Please find the error log below:


Epoch: [0][0/7785]  Time 28.992 (28.992) Data 7.722 (7.722) Loss 0.6674 (0.6674)
Epoch: [0][100/7785] Time 0.573 (2.387) Data 0.022 (0.623) Loss 0.8501 (0.8702)
Epoch: [0][200/7785] Time 0.578 (2.296) Data 0.024 (0.978) Loss 0.8438 (0.8276)
Epoch: [0][300/7785] Time 0.568 (2.221) Data 0.019 (1.049) Loss 0.4548 (0.7814)
Epoch: [0][400/7785] Time 0.668 (2.149) Data 0.022 (1.072) Loss 0.9218 (0.7557)
/home1/python3.8/site-packages/numpy/core/fromnumeric.py:3372: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home1/python3.8/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
Traceback (most recent call last):
  File "run.py", line 136, in <module>
    run()
  File "run.py", line 132, in run
    main(args)
  File "run.py", line 80, in main
    train(train_loader, model, criterion, optimizer, epoch)
  File "/home1/social-interactions/common/engine.py", line 32, in train
    output = model(video, audio)
  File "/home1/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/social-interactions/model/model.py", line 48, in forward
    audio_out = self.audio_encoder(audio)
  File "/home1/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
File "/home1/social-interactions/model/resse.py", line 96, in forward
    x = self.torchfb(x) + 1e-6
  File "/home1/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/python3.8/site-packages/torch/nn/modules/container.py", line 204, in forward
    input = module(input)
  File "/home1/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/social-interactions/model/resse.py", line 190, in forward
    input = F.pad(input, (1, 0), 'reflect')
RuntimeError: 2D or 3D (batch mode) tensor expected for input, but got: [ torch.cuda.FloatTensor{26,1,0} ]

I am not sure what is causing the error. It would be great if you could help!
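
One observation, in case it helps: the last traceback frame shows the failing input as torch.cuda.FloatTensor{26,1,0}, i.e. the audio tensor has zero samples, and reflect padding rejects empty inputs. The sketch below (my own guess, not code from this repo; the shapes are copied from the log above) appears to reproduce the same RuntimeError with an empty waveform:

import torch
import torch.nn.functional as F

# Hypothetical reproduction: a clip whose audio decodes to zero samples
# would arrive at the pad call as a (batch, channels, 0) tensor.
empty_audio = torch.zeros(26, 1, 0)

try:
    # Same call as model/resse.py line 190 in the traceback.
    F.pad(empty_audio, (1, 0), 'reflect')
except RuntimeError as e:
    print(e)  # "2D or 3D (batch mode) tensor expected for input, but got: ..."

If that is what is happening, it might mean one of the downloaded clips has a missing or zero-length audio track; the earlier "Mean of empty slice" numpy warnings could possibly come from the same empty data, though I have not confirmed that.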