Closed: sneakers-the-rat closed this issue 3 years ago.
We chose the behavior of the multi-channel filter on multi-channel sounds based on our use cases, but you are right that it may be counterintuitive to outsiders. Pointing to additional info regarding the recording of impulse responses is a great idea!
Yes, i think the "compound filter" (have no idea if that is even a term) and multichannel modes are both valid -- didn't mean to muddy the waters, just giving an outside perspective (i usually only work with one- or two-channel audio!), so no worries if you want to keep it as is, i certainly won't be offended :)
Ok, so after thinking about it for a while, we came to the conclusion that the clearest way is to not add any arguments to the apply function. Instead, one can apply a bank of filters to one sound in a for loop, like:

```python
for i in range(filter_bank.n_channels):
    sound = filter_bank.channel(i).apply(sound)
```
I will add an example of this to the documentation.
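The effect of that loop can be sketched with plain scipy, independent of the package (this is a generic illustration, not the package's actual API; here the "bank" is just a list of FIR coefficient arrays, and applying every channel in sequence yields the compound, double band-pass filter discussed above):

```python
import numpy as np
from scipy import signal

fs = 8000  # sample rate in Hz, arbitrary for this sketch
rng = np.random.default_rng(0)
sound = rng.standard_normal(fs)  # one second of white noise

# A "bank" of two FIR band-pass filters. Applying both in sequence
# acts as a compound filter whose passband is the intersection
# of the two bands (800-1200 Hz).
bank = [
    signal.firwin(101, [500, 1500], pass_zero=False, fs=fs),
    signal.firwin(101, [800, 1200], pass_zero=False, fs=fs),
]

filtered = sound
for taps in bank:
    filtered = signal.lfilter(taps, 1.0, filtered)

# Energy outside the intersection band is strongly attenuated.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
in_band = spectrum[(freqs > 800) & (freqs < 1200)].mean()
out_band = spectrum[freqs > 2000].mean()
```

This is the "compound" interpretation from the review; the package's per-channel interpretation would instead keep the two filtered signals separate.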
I could not find a proper source for a guide on recording transfer functions. The ones I found were either way too technical for our audience or instructions for some commercial software. I made some slight modifications in the docs and I think the info is now detailed enough for our purposes.
Thanks for the suggestion. `Filter.band` uses `scipy.signal.firwin2` under the hood, and that function does not have any kwargs that we are not already using, i.e. there is nothing more to set. However, there are some situations in which one may want to set things like the filter order directly and use, for instance, a Butterworth filter. IIR filters are currently not supported by the `Filter` class, but a one-off method in the `Sound` class (or an extension of `Sound.filter`) would work. We'll consider that. We have addressed the other points in the last commits.
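For readers unfamiliar with the distinction, the two design styles mentioned here look roughly like this in scipy (a generic sketch, not the package's code): `firwin2` builds an FIR filter from a sampled target frequency response, while a Butterworth design is specified by an order and cutoffs and yields IIR coefficients:

```python
import numpy as np
from scipy import signal

fs = 44100  # sample rate in Hz

# FIR design from a target magnitude response (what firwin2 does):
# a band-pass between 1 and 4 kHz, 1001 taps. The frequency grid
# must run from 0 to fs/2 and the gains give the desired response.
freqs = [0, 800, 1000, 4000, 5000, fs / 2]
gains = [0, 0, 1, 1, 0, 0]
fir_taps = signal.firwin2(1001, freqs, gains, fs=fs)

# IIR alternative: a 4th-order Butterworth band-pass, specified by
# order and cutoff frequencies rather than a sampled response.
sos = signal.butter(4, [1000, 4000], btype="bandpass", fs=fs, output="sos")

# Both can be applied to a signal:
noise = np.random.default_rng(1).standard_normal(fs)
fir_out = signal.lfilter(fir_taps, 1.0, noise)
iir_out = signal.sosfilt(sos, noise)
```

The IIR version achieves a steep rolloff with far fewer coefficients, which is why one might want it despite the FIR-only `Filter` class.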
thanks for considering these! still getting used to the structure of this new style of review; i just meant these as food for thought to take or leave -- it's your package and your design, so whatever you think is best, by all means do it :)
the syntax for applying compound filters is a little confusing to me, but if you've got an example showing the canonical way to do it with your package, so the ambiguity is articulated away, i think that sounds lovely :)
for measuring an impulse response, I think the typical way is to play a pulse (whether that's a literal 0-1 single-sample pulse or a longer white-noise burst), and what follows should be the impulse response in the time domain. the other way I know of is to play a sine sweep or some other spectrally flat sound, record the emitted sound, and take the difference in the fft domain. but yeah, maybe a little out of scope; if you can't find something to link to, then i think just a link to the wiki article for impulse response would be fine.
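The sweep approach described here can be sketched in a few lines of numpy/scipy (a simulation, not a measurement recipe: the "room" is a made-up two-tap FIR echo, and a real recording would be noisy, so the spectral division would normally be regularized):

```python
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(fs) / fs  # one second of time samples

# Excitation: a one-second logarithmic sine sweep from 100 Hz to 3 kHz.
sweep = signal.chirp(t, f0=100, f1=3000, t1=1.0, method="logarithmic")

# Pretend "room": a direct path plus a quieter echo 100 samples later.
true_ir = np.zeros(256)
true_ir[0] = 1.0
true_ir[100] = 0.4

# What a microphone would record: the sweep convolved with the room IR.
recorded = signal.fftconvolve(sweep, true_ir)

# Deconvolve: divide the recorded spectrum by the sweep spectrum,
# then transform back to get the impulse response in the time domain.
n = len(recorded)
H = np.fft.rfft(recorded, n) / np.fft.rfft(sweep, n)
estimated_ir = np.fft.irfft(H, n)[:256]
```

In this noiseless simulation `estimated_ir` recovers the direct path at sample 0 and the echo at sample 100.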
and yes, my comments re `**kwargs`-ing things are mostly oriented towards keeping things maintainable for y'all so you don't end up with a zillion arguments in the wrapper functions/methods, but since there is a reasonable amount of additional logic in your functions, i think that's less relevant.
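The forwarding pattern being discussed can be sketched like this (a hypothetical wrapper, not the package's actual code; `band_filter` and its parameters are made up for illustration):

```python
from scipy import signal

def band_filter(numtaps, freqs, gains, fs, **kwargs):
    """Hypothetical wrapper illustrating the **kwargs pattern: any
    extra keyword arguments (e.g. window=...) are forwarded to scipy
    untouched, so the wrapper never has to re-declare scipy's own
    growing list of parameters."""
    return signal.firwin2(numtaps, freqs, gains, fs=fs, **kwargs)

# Default Hamming window vs. an explicitly forwarded Blackman window:
taps_default = band_filter(101, [0, 500, 1000, 4000], [0, 1, 1, 0], fs=8000)
taps_blackman = band_filter(101, [0, 500, 1000, 4000], [0, 1, 1, 0], fs=8000,
                            window="blackman")
```

The trade-off the comment alludes to: forwarding keeps the wrapper small, but once the wrapper adds its own logic around specific arguments, explicit parameters become more honest than a pass-through.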
Going to group my optional comments here (as i don't really anticipate having any serious issues with the rest of the package)
- seems like `Filter.band` could take `**kwargs` and pass them to the scipy filter-generating function, but I didn't look too closely at how this was implemented to see how hard that would be.
- the axes are mislabeled here for `tf()` -- looks like the x and y labels should be swapped.
- The behavior of multiple filters is the opposite of what i would expect. I think this is probably "just a me thing", but i think it points at a place where the docs could walk all the "just me"s in the world along :) -- I would expect multiple filters to operate as a compound filter, so i could e.g. make a double band-pass filter, and applying it to a multichannel sound should apply both filters to both channels. I think the intended way of doing this is to provide a list of frequencies to a single filter object, right? Currently it applies each filter to the corresponding channel (as described correctly in the docs). I would want some option to use it the former way, so an arg like `each=True` in `apply` would resolve the ambiguity for me. The next section describes how to re-combine the bands, but creating a multichannel sound, applying a filter to each of its channels, and then combining each of its subbands is less intuitive to me than being able to choose whether to apply a multichannel filter as a compound/sequential/what-have-you filter or to split it across the channels. I see the utility of the splitting method, as demonstrated by modifying the bands independently before recombining, so definitely not saying get rid of it, just a little sugar to sweeten the method :)
- the noise-vocoded speech example isn't reproducible -- `signal` isn't declared; it seems like it would be good to include a quick example .wav in the docs or else explain how to get `signal`. I see that it's the contents of the `vocode` method, but that seems odd given the rest of the docs are examples. ps i think you mean "omitted" rather than "redacted" ;)
- I think the example of equalizing at the end here is adequate for our purposes, but i would strongly suggest giving some further example of how one might measure the impulse response of their audio setup. It could be as simple as linking to some youtube video or the wiki page for "impulse response", but if i were a naive reader who had never measured an impulse response, i wouldn't really know what to do here.
- the "equalization" header is repeated twice.