Closed · glaurent closed this 8 years ago
Awesome, thanks for your contribution @glaurent!
you're welcome
@glaurent can you point me to any resources that explain the level threshold concept and how the values should be interpreted?
Thanks,
Paul
@PCrompton it's quite simple: the InputSignalTracker analyses the audio signal received from the device's microphone. This signal has a level. My patch simply lets you set a threshold below which the signal will not be analysed. This is a simple way to filter out low-level noise and avoid recognizing "random" notes.
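Conceptually the gate can be pictured like this (a simplified sketch rather than the exact code in the branch; the `shouldAnalyse` helper and the RMS-to-dBFS conversion are illustrative assumptions):

```swift
import AVFoundation
import Accelerate

/// Simplified sketch: decide whether a buffer is loud enough to analyse.
/// The dBFS conversion and the default threshold are illustrative assumptions.
func shouldAnalyse(buffer: AVAudioPCMBuffer, levelThreshold: Float = -30.0) -> Bool {
    guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else {
        return false
    }

    // Root-mean-square amplitude of the first channel.
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))

    // Convert to dBFS; silence maps to -infinity and is always filtered out.
    let level = rms > 0 ? 20 * log10(rms) : -Float.infinity

    return level >= levelThreshold
}
```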
@glaurent I understood that much, and I noticed that you set the guitar tuner's threshold to -30.0. I played around with different values and found that anything above -20.0 isn't all that helpful. Does setting the threshold help to filter out harmonics? I'm having trouble detecting the correct octave below A3; it usually locks onto an upper partial rather than the fundamental. For instance, I play G3 and it interprets it as D5, or I play F3 and it returns F4. This is with `levelThreshold = -30.0` and `let config = Config(bufferSize: 4096, transformStrategy: .fft, estimationStrategy: .hps, audioURL: nil)`.
@PCrompton you'll have to play with the setting, depending on how close the device is to the sound source.
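For reference, wiring it up in the GuitarTuner example ends up looking roughly like the sketch below. This assumes the `Config`/`PitchEngine` API quoted above plus the `levelThreshold` property added by this branch, with `self` standing in for an object that conforms to `PitchEngineDelegate`:

```swift
import Beethoven
import Pitchy

// `self` is expected to conform to PitchEngineDelegate.
let config = Config(bufferSize: 4096,
                    transformStrategy: .fft,
                    estimationStrategy: .hps,
                    audioURL: nil)

let pitchEngine = PitchEngine(config: config, delegate: self)

// Ignore anything quieter than -30 dB; raise it if the room is noisy,
// lower it if soft notes are being dropped.
pitchEngine.levelThreshold = -30.0
pitchEngine.start()
```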
This branch implements a "signal level threshold" feature which lets the user set a signal level below which no pitch detection is attempted. This prevents the PitchEngineDelegate from being called when there is silence or near-silence.
Limitation: this feature is only implemented for the InputSignalTracker, not the OutputSignalTracker. I suppose it's possible, but since I don't know much about AVFoundation, I'm not sure how to do it. It's also likely that my implementation in InputSignalTracker could be improved.
Also, there should be a way to get the peak/average levels, so the user can set the threshold without guessing.
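One possible (purely illustrative) way to surface those readings is AVAudioRecorder's built-in metering; the `LevelMeter` wrapper below is a hypothetical helper, not part of this branch, and it assumes a record-capable audio session with microphone permission:

```swift
import AVFoundation

/// Hypothetical helper: expose average/peak input levels (in dBFS) so a user
/// can pick a sensible threshold instead of guessing.
final class LevelMeter {
    private let recorder: AVAudioRecorder

    init() throws {
        // Record to a throwaway file purely to get access to the level meters.
        let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("meter.caf")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatAppleIMA4,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]
        recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        _ = recorder.record()
    }

    /// Average and peak power for channel 0, refreshed on every call.
    func currentLevels() -> (average: Float, peak: Float) {
        recorder.updateMeters()
        return (recorder.averagePower(forChannel: 0), recorder.peakPower(forChannel: 0))
    }
}
```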
This branch also updates the Podfile of the GuitarTuner example.