I noticed that in the FIRFilter layer, you create a conv1d layer, but then use a separate conv1d function with the kernel of the conv1d layer in the forward pass. Is there a reason you did this rather than either using the conv1d layer directly or registering taps as a buffer and using that as the weight?
That is a fair question. Both approaches should work and are functionally equivalent; registering the taps as a buffer is probably the cleaner solution, as sketched below.
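A minimal sketch of the buffer-based alternative, assuming a mono signal of shape (batch, 1, time). The class name, the `taps` argument, and the filter length in the usage example are illustrative assumptions, not the original FIRFilter implementation.

```python
import torch
import torch.nn.functional as F

class FIRFilter(torch.nn.Module):
    """FIR filter with fixed (non-trainable) taps stored as a buffer."""

    def __init__(self, taps: torch.Tensor):
        super().__init__()
        # A buffer moves with .to()/.cuda() and is saved in state_dict,
        # but is not a trainable parameter. conv1d expects a weight of
        # shape (out_channels, in_channels, kernel_size).
        self.register_buffer("taps", taps.view(1, 1, -1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "same"-style padding for an odd-length kernel.
        return F.conv1d(x, self.taps, padding=self.taps.shape[-1] // 2)


# Hypothetical usage: a 63-tap moving-average filter on white noise.
taps = torch.full((63,), 1.0 / 63)
filt = FIRFilter(taps)
y = filt(torch.randn(8, 1, 1024))  # -> shape (8, 1, 1024)
```

This avoids carrying an unused `nn.Conv1d` module around: the taps live directly on the filter module, and the functional `F.conv1d` call makes it explicit that the weights are fixed.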