ksanjeevan / crnn-audio-classification

UrbanSound classification using Convolutional Recurrent Networks in PyTorch
MIT License
383 stars 80 forks

RuntimeError: mat1 and mat2 shapes cannot be multiplied (24600x1 and 1025x128) #23

Open crescentxxx opened 1 year ago

crescentxxx commented 1 year ago

I just installed all the required packages and downloaded the dataset, and then I got:

$ python run.py train -c myconfigs/config.json --cfg crnn.cfg
Compose(
    ProcessChannels(mode=avg)
    AdditiveNoise(prob=0.3, sig=0.001, dist_type=normal)
    RandomCropLength(prob=0.4, sig=0.25, dist_type=half)
    ToTensorAudio()
)
/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torchaudio/transforms.py:917: UserWarning: torchaudio.transforms.ComplexNorm has been deprecated and will be removed from future release.Please convert the input Tensor to complex type with `torch.view_as_complex` then use `torch.abs` and `torch.angle`. Please refer to https://github.com/pytorch/audio/issues/1337 for more details about torchaudio's plan to migrate to native complex type.
  warnings.warn(
/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torchparse-0.1-py3.9.egg/torchparse/utils.py:54: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  return (spatial + p2 - k)//s + 1
  0%|                                                               | 0/311 [00:00<?, ?it/s]/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/net/audio.py:10: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  return (lengths + 2 * pad - fft_length + hop_length) // hop_length
/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torchaudio/transforms.py:936: UserWarning: torchaudio.functional.functional.complex_norm has been deprecated and will be removed from 0.11 release. Please convert the input Tensor to complex type with `torch.view_as_complex` then use `torch.abs`. Please refer to https://github.com/pytorch/audio/issues/1337 for more details about torchaudio's plan to migrate to native complex type.
  return F.complex_norm(complex_tensor, self.power)
  0%|                                                               | 0/311 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/run.py", line 175, in <module>
    train_main(config, args.resume)
  File "/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/run.py", line 115, in train_main
    trainer.train()
  File "/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/train/base_trainer.py", line 88, in train
    result = self._train_epoch(epoch)
  File "/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/train/trainer.py", line 68, in _train_epoch
    output = self.model(data)
  File "/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/net/model.py", line 52, in forward
    xt, lengths = self.spec(xt, lengths)                
  File "/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gpu-server/disk/disk1/xuxin_workspace/projects/crnn-audio-classification-master/net/audio.py", line 55, in forward
    x = self.mel_scale(x)
  File "/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/gpu-server/anaconda3/envs/audio_cls/lib/python3.9/site-packages/torchaudio/transforms.py", line 386, in forward
    mel_specgram = torch.matmul(specgram.transpose(-1, -2), self.fb).transpose(-1, -2)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (24600x1 and 1025x128)

which is sad
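
For context, the failing call is torchaudio's MelScale, which multiplies the incoming spectrogram by an (n_stft, n_mels) filterbank. Below is a minimal sketch (not the repo's code) of why these particular shapes fail, assuming n_fft=2048 (which gives 1025 frequency bins) and n_mels=128, since those match the numbers in the error message:

```python
# Minimal sketch (not the repo's code) of what torchaudio's MelScale does and why
# the reported shapes fail. Assumes n_fft=2048 (-> 1025 frequency bins) and
# n_mels=128, which match the numbers in the error message.
import torch
import torchaudio

n_stft, n_mels, sr = 1025, 128, 22050
mel_scale = torchaudio.transforms.MelScale(n_mels=n_mels, sample_rate=sr, n_stft=n_stft)

# MelScale expects a (power) spectrogram shaped (..., n_stft, time):
good = torch.rand(1, n_stft, 400)       # (batch, freq, time)
print(mel_scale(good).shape)            # torch.Size([1, 128, 400])

# The traceback shows mat1 = 24600x1, i.e. the tensor reaching MelScale has a
# frequency axis of size 1 rather than 1025, so the matmul against the
# (1025, 128) filterbank cannot work:
bad = torch.rand(1, 1, 24600)           # freq dim is 1, not n_stft
try:
    mel_scale(bad)
except RuntimeError as err:
    print(err)                          # mat1 and mat2 shapes cannot be multiplied ...
```

In other words, whatever reaches `self.mel_scale(x)` in net/audio.py no longer has the (..., 1025, time) layout that MelScale needs, which is consistent with the ComplexNorm deprecation warnings earlier in the log.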

jessicachen01 commented 1 year ago

(quotes crescentxxx's original report and traceback in full)

Hi, I just came across the same error. Have you fixed it? Looking forward to your reply.

cpperrpr commented 1 year ago

(quotes crescentxxx's original report and jessicachen01's reply in full)

Hi, I just came across the same error too. Have you fixed it? Looking forward to your reply.

crescentxxx commented 1 year ago

@cpperrpr Dude, it was too long ago; I've completely forgotten what happened. Sorry I can't help ;(

zhonghao1997 commented 1 week ago

(quotes the earlier reply asking whether the error was fixed)

Hi, I just came across the same error. Have you fixed it? Looking forward to your reply.

zhonghao1997 commented 1 week ago

(quotes the original report and the earlier replies in full)

Have you fixed it?

crescentxxx commented 1 week ago

(quotes the original report and all earlier replies in full)

Hi, I think I fixed the problem at the time, but it's been so long that I've forgotten what I did. Sorry I can't help.
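
Since the actual fix wasn't recorded in this thread, here is a hedged sketch of one way to avoid the deprecated Spectrogram(power=None) + ComplexNorm path that the warnings point at on recent torchaudio versions. This is not the commenters' fix, and the parameter values are assumptions that should be checked against the repo's config:

```python
# A hedged sketch, not the fix used by the commenters (that fix wasn't recorded here):
# on recent torchaudio the deprecated Spectrogram(power=None) + ComplexNorm pipeline
# can be replaced with a power spectrogram computed directly, which keeps the
# (..., n_stft, time) shape that MelScale expects. All parameter values below are
# assumptions and should be matched against the repo's config.
import torch
import torchaudio

sr, n_fft, hop_length, n_mels = 22050, 2048, 512, 128

spec  = torchaudio.transforms.Spectrogram(n_fft=n_fft, hop_length=hop_length, power=2.0)
mel   = torchaudio.transforms.MelScale(n_mels=n_mels, sample_rate=sr, n_stft=n_fft // 2 + 1)
to_db = torchaudio.transforms.AmplitudeToDB(stype='power')

wave = torch.rand(1, 1, sr * 4)          # (batch, channel, time): time must be the last dim
x = spec(wave)                           # (batch, channel, 1025, frames)
x = mel(x)                               # (batch, channel, 128, frames)
x = to_db(x)
print(x.shape)
```

If the tensor reaching this pipeline is shaped (batch, time, channel) instead of (batch, channel, time), it would also need to be transposed first, which may be the underlying cause of the collapsed frequency axis seen in the traceback.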