Thank you very much for your data set and code.
I encountered this problem when training the model:
```
Traceback (most recent call last):
  File "F:/py_pro/MM-DistillNet-main/sec/optimization/train_methods.py", line 318, in
    logits_s, features_s = self.student_model(audio)
  File "D:\ProgramData\Anaconda3\envs\MM-DistillNet-main\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "F:\pypro\MM-DistillNet-main\src\YetAnotherEfficientDet.py", line 670, in forward
    _, p3, p4, p5 = self.backbone_net(inputs)
  File "D:\ProgramData\Anaconda3\envs\MM-DistillNet-main\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "F:\py_pro\MM-DistillNet-main\src\YetAnotherEfficientDet.py", line 556, in forward
    x = self.model._conv_stem(x)
  File "D:\ProgramData\Anaconda3\envs\MM-DistillNet-main\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "F:\py_pro\MM-DistillNet-main\src\YetAnotherEfficientNet.py", line 54, in forward
    x = F.pad(x, [left, right, top, bottom])
  File "D:\ProgramData\Anaconda3\envs\MM-DistillNet-main\lib\site-packages\torch\nn\functional.py", line 3998, in _pad
    assert len(pad) // 2 <= input.dim(), "Padding length too large"
RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same.
```
I haven't been able to solve this. Did I make an error when processing the audio files?
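In case it helps to show what I mean: the error seems to say my input tensor is float64 while the model weights are float32. Here is a minimal sketch of that mismatch and the cast that I believe avoids it (the shapes and the `Conv2d` layer are placeholders, not the repository's actual code):

```python
import numpy as np
import torch

# NumPy arrays default to float64, so torch.from_numpy produces a
# DoubleTensor; the model's weights are float32 (FloatTensor).
audio = torch.from_numpy(np.random.rand(1, 3, 64, 64))
print(audio.dtype)  # torch.float64

conv = torch.nn.Conv2d(3, 8, kernel_size=3)  # weights are float32

# Casting the input to float32 before the forward pass resolves the
# "Input type ... and weight type ... should be the same" error.
audio = audio.float()
out = conv(audio)
print(out.dtype)  # torch.float32
```

Should I be applying a cast like this somewhere in the audio preprocessing, or is the data loader supposed to return float32 already?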