Hi, thanks for raising this issue. I did some testing and found something. The following example runs without error on my M1 Mac, but only when the batch size is less than 16. When I set bs=16, I get the same error as you reported. This does not appear to be a problem with auraloss, but rather with the torch CPU backend, specifically the convolution operation in NNPACK. For now, if you are using auraloss for evaluation on the CPU, I would suggest using a smaller batch size to work around the issue. Let me know if that works.
import torch
import auraloss

bs = 2           # batch size; the error appears once bs reaches 16 on CPU
chs = 1          # number of channels
seq_len = 44100  # number of samples (1 s at 44.1 kHz)

x = torch.randn(bs, chs, seq_len)
y = torch.randn(bs, chs, seq_len)

fir = auraloss.perceptual.FIRFilter()
x_out, y_out = fir(x, y)
print(x_out.shape, y_out.shape)
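If you need a larger effective batch on the CPU, one possible workaround is to split the batch into chunks smaller than 16 before calling the filter and concatenate the results afterwards. This is only a sketch that reuses the variables from the example above; the helper name and the max_bs parameter are hypothetical and not part of auraloss:

def filter_in_chunks(fir, x, y, max_bs=8):
    # Apply the filter to chunks along the batch dimension, each smaller
    # than the size that triggers the NNPACK error on CPU.
    x_outs, y_outs = [], []
    for xc, yc in zip(x.split(max_bs), y.split(max_bs)):
        xc_out, yc_out = fir(xc, yc)
        x_outs.append(xc_out)
        y_outs.append(yc_out)
    return torch.cat(x_outs), torch.cat(y_outs)

x_big = torch.randn(32, chs, seq_len)
y_big = torch.randn(32, chs, seq_len)
x_out, y_out = filter_in_chunks(fir, x_big, y_big)
print(x_out.shape, y_out.shape)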
Thanks, Christian! Yes, the error occurs when the batch size is greater than or equal to 16. That is an interesting PyTorch bug; maybe I should report it to PyTorch in the future.
Fix: the input tensors should be three-dimensional, not two-dimensional. I have fixed that in my initial post.
Hi!
As recommended by Välimäki et al., a pre-emphasis filter could be applied before applying the ESR loss. An auraloss.perceptual.FIRFilter instance, however, cannot be successfully called when the PyTorch device is the CPU. Interestingly, the instance can be called on an Nvidia CUDA device without any runtime error.

Expected Behavior
An auraloss.perceptual.FIRFilter instance can be successfully called regardless of the device.

Current Behavior
When calling an auraloss.perceptual.FIRFilter instance on the CPU, a runtime error is raised.

Steps to Reproduce
1. Create an auraloss.perceptual.FIRFilter instance.
2. Move the FIRFilter instance to the CPU device.
3. Create two input tensors on the CPU.
4. Call the FIRFilter instance with these two tensors as parameters (see the sketch below).
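A minimal sketch of these steps. The batch size of 16 is an assumption here; per the discussion above, any batch size of 16 or more triggers the error on the CPU:

import torch
import auraloss

bs, chs, seq_len = 16, 1, 44100

fir = auraloss.perceptual.FIRFilter()  # pre-emphasis filter (default settings)
fir.to("cpu")                          # move the filter to the CPU device

x = torch.randn(bs, chs, seq_len)      # two three-dimensional input tensors on CPU
y = torch.randn(bs, chs, seq_len)

x_out, y_out = fir(x, y)               # raises a RuntimeError on CPU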
Context (Environment)
CPU: Apple M1 Max