Open danmackinlay opened 4 years ago
As you said, you can always convert them after they have been calculated. Is there a benefit to having the window function do this conversion for you? All filtering and convolution methods currently accept Float32 arrays.
I guess accepting an element type would be consistent with many function signatures for array initialization.
There is absolutely a benefit; whether the benefit is worth the effort would be a better question. 😉
In my particular use case the relative benefit might be greater than in others - I'm working on deep-learning-DSP stuff where everything is supposed to be Float32 on the GPU; allocating Float64 arrays, converting, then freeing the Float64 is not ideal for performance. I can write some complicated caching infrastructure to avoid this step (there are a lot of different windows in my application), but it seems it would be simpler to simply generate the windows as Float32 etc. in the first place. Also, an in-place makewindow! that one could pass a CuArray to would probably not hurt in my particular context.
More generally, I suspect that even outside of the GPU context there is likely some performance benefit because of how much audio work is in Float32. But that might be more marginal.
It seems like having rect!, gaussian!, hanning!, etc. window functions that accept a destination array would be pretty straightforward; the conversion would then happen when the array was assigned to. It would solve your memory-allocation issue, but the actual window math would still be happening in Float64.
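A mutating variant along those lines might look like the sketch below. This is hypothetical, not DSP.jl's actual API; the formula here is the standard symmetric Hann window, which may differ in detail from what makewindow produces.

```julia
# Hypothetical destination-array variant: the math runs in Float64,
# and the conversion to eltype(dest) happens on assignment.
function hanning!(dest::AbstractVector)
    n = length(dest)
    for k in 0:n-1
        dest[k + 1] = 0.5 * (1 - cos(2π * k / (n - 1)))  # symmetric Hann
    end
    return dest
end

w = hanning!(Vector{Float32}(undef, 1024))  # no Float64 array is ever allocated
```

The same signature would accept a CuArray, though a scalar-indexing loop like this is slow on GPU arrays; a broadcast formulation would be needed there.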
Doing everything in Float32 would be a bit more complicated, as all the constants used in the window functions, as well as the x that's generated by makewindow, would all need to have their type be configurable.
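For illustration, a fully type-parameterized version could convert the constants up front so the whole computation stays in T. This is a sketch assuming the standard symmetric Hann formula; DSP.jl's makewindow machinery is more general than this.

```julia
# Sketch: all constants and intermediates carry the requested type T,
# so no Float64 arithmetic happens for T == Float32.
function hanning(::Type{T}, n::Integer) where {T<:AbstractFloat}
    half  = T(0.5)
    scale = T(2π) / T(n - 1)   # for a concrete T these conversions are cheap
    return [half * (1 - cos(scale * k)) for k in 0:n-1]
end

hanning(n::Integer) = hanning(Float64, n)  # preserve the current default
```

Dispatching on the type this way keeps `hanning(n)` backward compatible while letting callers opt in with `hanning(Float32, n)`.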
For my work I'm generally not creating windows over and over again (I often generate a window once and then use it many times), so converting to a different type wouldn't do much for me. That said, I wouldn't be opposed to a PR making this more configurable.
Yes, I know that creating lots of windows is not common, so this is not a high priority for others. I'll put a PR on my todo list and see if I get to it. AFAICT it should even be feasible to get the calculations Float32 end-to-end (I'm looking just at hanning right now) without much extra effort if I'm going to get there anyway; IIRC eltype(somearray)(3.0) should be correctly optimized by the compiler.
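The eltype trick mentioned there looks roughly like this - a minimal check, not hanning itself:

```julia
# Converting a literal through the array's element type is type-stable;
# for a concrete eltype the compiler can fold eltype(A)(3.0) to a constant.
A = Float32[1.0, 2.0, 3.0]
c = eltype(A)(3.0)   # c is 3.0f0, a Float32
```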
The windows in windows.jl are always Float64. However, for many DSP applications this is not desirable - for example, audio data is usually Float32 or even a fixed-point type. Also, for machine learning applications such as Flux.jl, Float32 is often desirable to keep models compact and to improve GPU compatibility (see also #301). Ideally we should extend windows.jl to accept a type parameter rather than doing the calculations in Float64 and then using convert to get the desired type.